Full Code of Mohammedcha/gplay-scraper for AI

Repository: Mohammedcha/gplay-scraper
Branch: main
Commit: 304dcd3d3546
Files: 81
Total size: 567.4 KB

Directory structure:
gplay-scraper/

├── .github/
│   ├── ISSUE_TEMPLATE/
│   │   ├── bug_report.md
│   │   └── feature_request.md
│   ├── pull_request_template.md
│   └── workflows/
│       ├── docs.yml
│       └── test.yml
├── .gitignore
├── CHANGELOG.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── LICENSE
├── MANIFEST.in
├── README/
│   ├── APP_METHODS.md
│   ├── DEVELOPER_METHODS.md
│   ├── LIST_METHODS.md
│   ├── README.md
│   ├── REVIEWS_METHODS.md
│   ├── SEARCH_METHODS.md
│   ├── SIMILAR_METHODS.md
│   └── SUGGEST_METHODS.md
├── README.md
├── SECURITY.md
├── build_docs.py
├── docs/
│   ├── README.md
│   ├── api/
│   │   ├── app.rst
│   │   ├── developer.rst
│   │   ├── list.rst
│   │   ├── reviews.rst
│   │   ├── search.rst
│   │   ├── similar.rst
│   │   └── suggest.rst
│   ├── conf.py
│   ├── configuration.rst
│   ├── error_handling.rst
│   ├── examples.rst
│   ├── fields.rst
│   ├── index.rst
│   ├── installation.rst
│   ├── quickstart.rst
│   └── requirements.txt
├── examples/
│   ├── README.md
│   ├── app_methods_example.py
│   ├── developer_methods_example.py
│   ├── list_methods_example.py
│   ├── reviews_methods_example.py
│   ├── search_methods_example.py
│   ├── similar_methods_example.py
│   └── suggest_methods_example.py
├── gplay_scraper/
│   ├── __init__.py
│   ├── app.py
│   ├── config.py
│   ├── core/
│   │   ├── __init__.py
│   │   ├── gplay_methods.py
│   │   ├── gplay_parser.py
│   │   └── gplay_scraper.py
│   ├── exceptions.py
│   ├── models/
│   │   ├── __init__.py
│   │   └── element_specs.py
│   └── utils/
│       ├── __init__.py
│       ├── constants.py
│       ├── error_handling.py
│       ├── helpers.py
│       └── http_client.py
├── output/
│   ├── app_example.json
│   ├── developer_example.json
│   ├── list_example.json
│   ├── reviews_example.json
│   ├── search_example.json
│   ├── similar_example.json
│   └── suggest_example.json
├── requirements.txt
├── setup.py
└── tests/
    ├── __init__.py
    ├── test_app_methods.py
    ├── test_basic.py
    ├── test_developer_methods.py
    ├── test_list_methods.py
    ├── test_package.py
    ├── test_reviews_methods.py
    ├── test_search_methods.py
    ├── test_similar_methods.py
    └── test_suggest_methods.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/ISSUE_TEMPLATE/bug_report.md
================================================
---
name: Bug report
about: Create a report to help us improve
title: '[BUG] '
labels: bug
assignees: ''
---

**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior:
1. Use app ID '...'
2. Call method '....'
3. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Code Example**
```python
from gplay_scraper import GPlayScraper
scraper = GPlayScraper()
# Your code here
```

**Error Output**
```
Paste the full error message here
```

**Environment:**
 - OS: [e.g. Windows 10, macOS, Linux]
 - Python version: [e.g. 3.8.5]
 - Library version: [e.g. 1.0.2]

**Additional context**
Add any other context about the problem here.

================================================
FILE: .github/ISSUE_TEMPLATE/feature_request.md
================================================
---
name: Feature request
about: Suggest an idea for this project
title: '[FEATURE] '
labels: enhancement
assignees: ''
---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Use Case**
Describe how this feature would be used:
```python
# Example of how the new feature would work
scraper = GPlayScraper()
result = scraper.new_method(app_id)
```

**Additional context**
Add any other context or screenshots about the feature request here.

================================================
FILE: .github/pull_request_template.md
================================================
# Pull Request

## Description
Brief description of changes made.

## Type of Change
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Documentation update

## Testing
- [ ] I have tested my changes locally
- [ ] I have added tests for new functionality
- [ ] All existing tests pass

## Code Quality
- [ ] My code follows the project's style guidelines
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation

## Related Issues
Fixes #(issue number)

## Additional Notes
Any additional information about the changes.

================================================
FILE: .github/workflows/docs.yml
================================================
name: Build and Deploy Documentation

on:
  push:
    branches: [ main ]

permissions:
  contents: write

jobs:
  docs:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout
      uses: actions/checkout@v4
    
    - name: Set up Python
      uses: actions/setup-python@v5
      with:
        python-version: '3.11'
    
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install -r docs/requirements.txt
    
    - name: Build documentation
      run: |
        cd docs
        sphinx-build -b html . _build/html
        touch _build/html/.nojekyll
    
    - name: Deploy to GitHub Pages
      uses: peaceiris/actions-gh-pages@v4
      if: github.ref == 'refs/heads/main'
      with:
        github_token: ${{ secrets.GITHUB_TOKEN }}
        publish_dir: ./docs/_build/html
        force_orphan: true

================================================
FILE: .github/workflows/test.yml
================================================
name: Tests

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.8", "3.9", "3.10", "3.11", "3.12"]
      fail-fast: false

    steps:
    - uses: actions/checkout@v4
    
    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v4
      with:
        python-version: ${{ matrix.python-version }}
    
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install -r requirements.txt
        pip install pytest pytest-cov
    
    - name: Run package and basic functionality tests
      run: |
        python -m unittest tests.test_package tests.test_basic -v
    
    - name: Run network-dependent tests (optional)
      continue-on-error: true
      timeout-minutes: 15
      run: |
        echo "Running network-dependent tests with delays (failures expected due to rate limiting)..."
        python -m unittest tests.test_app_methods -v || echo "App methods test completed"
        python -m unittest tests.test_search_methods -v || echo "Search methods test completed"
        python -m unittest tests.test_reviews_methods -v || echo "Reviews methods test completed"
        python -m unittest tests.test_developer_methods -v || echo "Developer methods test completed"
        python -m unittest tests.test_list_methods -v || echo "List methods test completed"
        python -m unittest tests.test_similar_methods -v || echo "Similar methods test completed"
        python -m unittest tests.test_suggest_methods -v || echo "Suggest methods test completed"

================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/
!docs/.nojekyll

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
Pipfile.lock

# PEP 582
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# IDE
.vscode/
.idea/
*.swp
*.swo

# OS
.DS_Store
Thumbs.db

# Project specific
backup/
temp/
*.tmp

# Publishing scripts (local use only)
publish_to_github.bat
publish_to_github.sh
publish_to_pypi.bat
publish_to_pypi.sh
update_github.bat
update_github.sh
update_pypi.bat
update_pypi.sh

# Test and debug files
xx.py
x_fallback.py
test_all_methods.py
debug_limbo*.py
limbo_ds5_raw.txt
appbrain_scraper.py

# Documentation folders (local use only)
wiki/
community/

# Chrome extensions
chrome-extension/
chrome-extension-new/
firefox-extensions/
firefox-extension-new/
edge-extensions/
edge-extension-new/
opera-extensions/
opera-extension-new/
webstore-upload/
webstore-upload-new/
webstore-upload-chrome/
webstore-upload-firefox/
webstore-upload-edge/
webstore-upload-opera/
webstore-upload-chrome-new/
webstore-upload-firefox-new/
webstore-upload-edge-new/
webstore-upload-opera-new/
build-extensions/
build-extension/
build-extension-chrome/
build-extension-firefox/
build-extension-edge/
build-extension-opera/
build-extension-chrome-new/
build-extension-firefox-new/
build-extension-edge-new/
build-extension-opera-new/
dist-extensions/
dist-extension/
dist-extension-chrome/
dist-extension-firefox/
dist-extension-edge/
dist-extension-opera/
dist-extension-chrome-new/
dist-extension-firefox-new/
dist-extension-edge-new/
dist-extension-opera-new/
release/
release-chrome/
release-firefox/
release-edge/
release-opera/
release-chrome-new/
release-firefox-new/
release-edge-new/
release-opera-new/
temp-extensions/
temp-extension/
temp-extension-chrome/
temp-extension-firefox/
temp-extension-edge/
temp-extension-opera/
temp-extension-chrome-new/


================================================
FILE: CHANGELOG.md
================================================
# Changelog

All notable changes to this project will be documented in this file.

## [1.0.6] - 2025-11-16

### Bug Fixes

- **Reviews Pagination Fix**: Fixed critical issue when requesting more reviews than available
  - Resolved 'NoneType' object is not subscriptable error
  - Improved token extraction logic for empty review responses
  - Now gracefully returns available reviews instead of crashing
  - Enhanced error handling in ReviewsScraper and ReviewsParser
- **Empty Response Handling**: Better handling of apps with limited reviews
  - Safe bounds checking for pagination tokens
  - Proper null checking for empty data structures
  - Graceful degradation when no more reviews are available
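
The safe-bounds idea behind this fix can be illustrated with a small sketch. The payload layout and helper name here are hypothetical, for illustration only; the library's actual parser differs:

```python
def extract_next_token(response_data):
    """Safely pull a pagination token from a parsed reviews payload.

    Returns None instead of raising when the payload is empty or the
    token position is missing (hypothetical data layout).
    """
    if not response_data:
        return None
    last = response_data[-1]
    # Bounds-check every step instead of indexing blindly; a short or
    # non-list element means "no more reviews", not a crash.
    if not isinstance(last, (list, tuple)) or len(last) < 2:
        return None
    return last[1]

# Graceful degradation: malformed or empty payloads yield None.
print(extract_next_token(None))              # None
print(extract_next_token([]))                # None
print(extract_next_token([["rev", "tok"]]))  # tok
```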

### Acknowledgments

- Thanks to [@PhamDinhThienVu](https://github.com/PhamDinhThienVu) for reporting the reviews pagination bug

## [1.0.5] - 2025-10-18

### New Features

- **Publisher Country Detection**: Added `publisherCountry` field to app data
  - Automatically detects developer's country from phone number and address
  - Uses international phone prefixes and address parsing
  - Returns country names like "United States", "Germany", "Japan", etc.
  - Handles multiple countries when phone and address differ (e.g., "United States/Germany")
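
A minimal sketch of the prefix-plus-address idea (tiny illustrative table and hypothetical function name; the library ships a full international prefix map):

```python
# Illustrative subset only -- the real lookup table covers all prefixes.
PHONE_PREFIXES = {"+1": "United States", "+49": "Germany", "+81": "Japan"}

def detect_country(phone=None, address=None):
    """Return a country name, joining phone- and address-derived hits with '/'."""
    hits = []
    if phone:
        # Longest-prefix match so more specific prefixes win.
        for prefix in sorted(PHONE_PREFIXES, key=len, reverse=True):
            if phone.strip().startswith(prefix):
                hits.append(PHONE_PREFIXES[prefix])
                break
    if address:
        for country in PHONE_PREFIXES.values():
            if country.lower() in address.lower() and country not in hits:
                hits.append(country)
    return "/".join(hits) if hits else None

print(detect_country(phone="+49 30 123456"))  # Germany
print(detect_country(phone="+1 650 555 0100", address="Berlin, Germany"))
# United States/Germany
```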

### Removed Features

- **Removed updatedTimestamp**: Removed deprecated timestamp field that was causing confusion


### Bug Fixes

- **Enhanced Error Handling**: Improved error handling and retry mechanisms
  - Better HTTP client fallback when requests fail
  - More robust JSON parsing with multiple fallback strategies
  - Improved handling of network timeouts and connection errors
- **Retry Mechanism**: Fixed automatic retry logic for failed requests
  - Exponential backoff for rate limiting
  - Automatic HTTP client switching on failures
  - Better error recovery for temporary network issues
- **General Bug Fixes**: Fixed various edge cases and improved stability
  - Better handling of malformed JSON responses
  - Improved data extraction for apps with missing fields
  - Enhanced Unicode handling for international app data

## [1.0.4] - 2025-10-16

### New Features

- **Assets Parameter**: Added configurable image sizes for all app methods
  - `SMALL` (512px width)
  - `MEDIUM` (1024px width) - Default
  - `LARGE` (2048px width)
  - `ORIGINAL` (Maximum size)
  - Available in all app methods: `app_analyze()`, `app_get_field()`, `app_get_fields()`, `app_print_field()`, `app_print_fields()`, `app_print_all()`
  - Affects icon, headerImage, screenshots, and videoImage URLs

### Bug Fixes

- **Release Date Fallback**: Fixed missing release dates when using language/country parameters
  - Added automatic fallback request without `hl`/`gl` parameters when release date is null
  - Ensures release date extraction for apps in all regions
- **Path Resolution**: Fixed various path-related issues in data extraction
- **Image URL Processing**: Improved image URL formatting with proper size parameters

### Usage Examples

```python
# Use different asset sizes
data = scraper.app_analyze("com.whatsapp", assets="LARGE")
icon = scraper.app_get_field("com.whatsapp", "icon", assets="SMALL")
scraper.app_print_all("com.whatsapp", assets="ORIGINAL")
```

## [1.0.3] - 2025-10-15

### New Features

- **Enhanced Search Pagination**: Search can now fetch unlimited results (300+) via automatic pagination, removing the previous 50-result cap

- **Improved Search Performance**: Optimized search result fetching with better token handling and batch processing

### Bug Fixes & Code Quality Improvements

- **Code Review**: Addressed security vulnerabilities and code quality issues
- **Error Handling**: Improved error handling patterns across all modules
- **Performance**: Optimized JSON parsing and HTTP client fallback logic
- **Security**: Fixed potential SSRF and injection vulnerabilities
- **Maintainability**: Enhanced code readability and documentation

## [1.0.2] - 2025-01-15

### Major Release - Complete Library Redesign 🚀

This version represents a complete rewrite of GPlay Scraper with a focus on modularity, extensibility, and comprehensive data extraction across all Google Play Store features.

### New Features

#### 7 Method Types with 42 Functions

- **App Methods** - Extract 65+ data fields from any app (ratings, installs, pricing, permissions, screenshots, etc.)
- **Search Methods** - Search Google Play Store apps with comprehensive filtering and pagination
- **Reviews Methods** - Extract user reviews with ratings, timestamps, helpful votes, and detailed feedback
- **Developer Methods** - Get all apps published by a specific developer using developer ID
- **List Methods** - Access top charts (TOP_FREE, TOP_PAID, TOP_GROSSING) by category with 54 categories
- **Similar Methods** - Find similar/competitor apps for market research and competitive analysis
- **Suggest Methods** - Get search suggestions and autocomplete for ASO keyword research

Each method type includes 6 functions:
- `analyze()` - Get all data as dictionary/list
- `get_field()` - Get single field value
- `get_fields()` - Get multiple fields as dictionary
- `print_field()` - Print single field to console
- `print_fields()` - Print multiple fields to console
- `print_all()` - Print all data as formatted JSON

#### 7 HTTP Clients with Automatic Fallback

- **requests** (default) - Standard Python HTTP library, reliable and well-tested
- **curl_cffi** - Browser impersonation with TLS fingerprinting, best for avoiding detection
- **tls_client** - Custom TLS fingerprinting, good for bypassing restrictions
- **httpx** - Modern async-capable HTTP client with HTTP/2 support
- **urllib3** - Low-level HTTP client with connection pooling
- **cloudscraper** - Cloudflare bypass capabilities
- **aiohttp** - Async HTTP client for high-performance concurrent requests

Automatic fallback system tries clients in order until one succeeds, ensuring maximum reliability.

#### Multi-Language & Multi-Region Support

- Support for 100+ languages (en, es, fr, de, ja, ko, zh, ar, etc.)
- Support for 150+ countries (us, gb, ca, au, in, br, jp, etc.)
- Get localized app data, reviews, and search results
- Region-specific pricing and availability information

#### Comprehensive Data Extraction

- **65+ App Fields**: title, developer, ratings, installs, price, screenshots, permissions, release date, update date, size, version, content rating, privacy policy, and more
- **Review Data**: user name, rating, review text, timestamp, app version, helpful votes, developer reply
- **Search Results**: app ID, title, developer, rating, price, icon, screenshots, description snippet
- **Developer Portfolio**: all apps from a developer with complete metadata
- **Top Charts**: ranked lists with install counts, ratings, and trending data
- **Similar Apps**: competitor analysis with relevance scoring
- **Search Suggestions**: popular keywords and autocomplete terms

#### Enhanced Architecture

- **Modular Design**: Separate classes for methods, scrapers, and parsers
- **Core Modules**: `gplay_methods.py`, `gplay_scraper.py`, `gplay_parser.py`
- **HTTP Client Abstraction**: `HttpClient` class with pluggable client support
- **Element Specs**: Reusable CSS selector specifications for data extraction
- **Helper Utilities**: Text processing, date parsing, JSON cleaning, age calculation
- **Exception Hierarchy**: 6 custom exception types for specific error scenarios

#### Documentation & Testing

- **Comprehensive Docstrings**: All 42 methods, 7 scrapers, 7 parsers, and utility functions documented
- **Sphinx Documentation**: Professional HTML documentation with examples, API reference, and guides
- **HTTP Clients Guide**: Detailed documentation on when and how to use each HTTP client
- **Fields Reference**: Complete reference of all 65+ fields, categories, and parameters
- **Unit Tests**: Complete test coverage for all 7 method types
- **Examples**: Real-world usage examples for each method type

#### Configuration & Customization

- **Configurable Parameters**: Language, country, count, sort order, collection type
- **Rate Limiting**: Built-in delays to prevent blocking (configurable)
- **Error Handling**: Graceful fallbacks and informative error messages
- **Logging**: Detailed logging for debugging and monitoring
- **Timeout Control**: Configurable request timeouts
- **Retry Logic**: Automatic retries with exponential backoff
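
The retry policy can be sketched as follows. The doubling delays and parameter names are an illustrative policy, not the library's exact configuration:

```python
import time

def retry_with_backoff(fn, retries=3, base_delay=1.0, sleep=time.sleep):
    """Call fn, retrying on failure with exponentially growing delays.

    Delay sequence: base_delay, 2*base_delay, 4*base_delay, ...
    The last failure is re-raised once retries are exhausted.
    """
    for attempt in range(retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == retries:
                raise
            sleep(base_delay * (2 ** attempt))

# Demo with an injected fake sleep so the example runs instantly.
delays = []
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("rate limited")
    return "ok"

print(retry_with_backoff(flaky, retries=3, base_delay=0.5, sleep=delays.append))
print(delays)  # [0.5, 1.0]
```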

### Breaking Changes

- Complete API redesign - not backward compatible with v1.0.1
- Method names changed from `get_app_details()` to `app_analyze()`
- New parameter structure for all methods
- HTTP client must be specified or uses automatic fallback
- Exception types renamed and reorganized

### Migration Guide

Old (v1.0.1):
```python
scraper = GPlayScraper()
data = scraper.get_app_details("com.whatsapp")
```

New (v1.0.2):
```python
scraper = GPlayScraper()
data = scraper.app_analyze("com.whatsapp")
```

### Performance Improvements

- Faster JSON parsing with optimized regex patterns
- Reduced memory usage with streaming parsers
- Better caching of HTTP client instances
- Parallel request support with async clients

### Bug Fixes

- Fixed JSON parsing for apps with special characters in descriptions
- Fixed review extraction for apps with no reviews
- Fixed developer ID extraction from developer pages
- Fixed category parsing for apps in multiple categories
- Fixed price parsing for apps with regional pricing
- Fixed screenshot URL extraction for apps with video previews

## [1.0.1] - 2025-10-07

### Added
- **Paid App Support**: Fixed JSON parsing issues for paid apps with malformed data structures
- **Reviews Extraction**: Successfully extracts user reviews for both free and paid apps
- **Organized Output**: Restructured JSON output with logical field grouping:
  - Basic Information
  - Category & Genre
  - Release & Updates
  - Media Content
  - Install Statistics
  - Ratings & Reviews
  - Advertising
  - Technical Details
  - Content Rating
  - Privacy & Security
  - Pricing & Monetization
  - Developer Information
  - ASO Analysis
- **Enhanced JSON Parser**: Bracket-matching algorithm for complex nested structures
- **Original Price Field**: Added `originalPrice` field for sale price tracking
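
The bracket-matching idea can be shown with a generic sketch that walks a string while tracking nesting depth and quoted spans. This is an illustration of the technique, not the library's parser:

```python
def extract_balanced(text, start):
    """Return the substring of text beginning at start that spans one
    balanced [...] or {...} structure, honouring quoted strings."""
    open_ch = text[start]
    close_ch = {"[": "]", "{": "}"}[open_ch]
    depth = 0
    in_string = False
    i = start
    while i < len(text):
        ch = text[i]
        if in_string:
            if ch == "\\":
                i += 1  # skip the escaped character
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch == open_ch:
            depth += 1
        elif ch == close_ch:
            depth -= 1
            if depth == 0:
                return text[start : i + 1]
        i += 1
    raise ValueError("unbalanced structure")

# Brackets inside quoted strings do not confuse the depth counter.
blob = 'AF_initDataCallback({key: "ds:5", data: [1, ["a]b", [2, 3]]]});'
print(extract_balanced(blob, blob.index("[")))  # [1, ["a]b", [2, 3]]]
```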

### Fixed
- **JSON Parsing Errors**: Resolved "Expecting ',' delimiter" errors for paid apps
- **Reviews Data**: Fixed empty reviews arrays by implementing alternative parsing methods
- **Malformed Data Handling**: Improved handling of unquoted keys and malformed JSON from Play Store

### Improved
- **Error Handling**: Better fallback mechanisms for JSON parsing failures
- **Data Extraction**: More robust extraction for apps with complex pricing structures
- **Code Organization**: Cleaner separation of parsing logic and error recovery

## [1.0.0] - 2025-10-06

### Added
- Initial release of GPlay Scraper
- Complete Google Play Store app data extraction
- ASO (App Store Optimization) analysis
- Modular architecture with separate core modules
- Support for 60+ data fields including:
  - Basic app information
  - Install statistics and metrics
  - Ratings and reviews data
  - Technical specifications
  - Developer information
  - Media content (screenshots, videos, icons)
  - Pricing and monetization details
  - ASO keyword analysis
- Multiple access methods:
  - `analyze()` - Complete app analysis
  - `get_field()` - Single field retrieval
  - `get_fields()` - Multiple field retrieval
  - `print_field()` - Direct field printing
  - `print_fields()` - Multiple field printing
  - `print_all()` - Complete data printing
- Comprehensive documentation and examples
- Error handling and logging
- Rate limiting considerations
- Cross-platform compatibility

### Features
- Web scraping of Google Play Store pages
- JSON data extraction and parsing
- Automatic install metrics calculation
- Keyword frequency analysis
- Readability scoring
- Review data extraction
- Image URL processing
- Date parsing and age calculation

================================================
FILE: CODE_OF_CONDUCT.md
================================================
# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our
community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
  and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
  overall community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or
  advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
  address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
  professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.

## Scope

This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement through GitHub Issues.

All complaints will be reviewed and investigated promptly and fairly.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.

[homepage]: https://www.contributor-covenant.org

================================================
FILE: CONTRIBUTING.md
================================================
# Contributing to GPlay Scraper

Thank you for your interest in contributing! 

## Development Setup

1. Fork the repository
2. Clone your fork: `git clone https://github.com/yourusername/gplay-scraper.git`
3. Install in development mode: `pip install -e .`
4. Install dev dependencies: `pip install pytest`

## Running Tests

```bash
python -m pytest tests/ -v
```

## Code Style

- Follow PEP 8
- Add docstrings to new functions
- Include type hints where appropriate

## Submitting Changes

1. Create a feature branch: `git checkout -b feature-name`
2. Make your changes
3. Add tests for new functionality
4. Run tests to ensure they pass
5. Submit a pull request

## Reporting Issues

Please use GitHub Issues to report bugs or request features.

================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2025 Mohammed Cha

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

================================================
FILE: MANIFEST.in
================================================
include README.md
include LICENSE
include requirements.txt
include CHANGELOG.md
include CONTRIBUTING.md
include SECURITY.md
recursive-include examples *.py
recursive-include tests *.py

================================================
FILE: README/APP_METHODS.md
================================================
# App Methods

Extract detailed information about individual Google Play Store apps.

## Quick Start

```python
from gplay_scraper import GPlayScraper

scraper = GPlayScraper()

# Get all data
data = scraper.app_analyze("com.whatsapp")
print(data['title'], data['score'], data['installs'])

# Get specific fields
title = scraper.app_get_field("com.whatsapp", "title")
print(title)  # WhatsApp Messenger

# Get multiple fields
info = scraper.app_get_fields("com.whatsapp", ["title", "score", "developer"])
print(info)
```

---

## HTTP Clients

The library supports 7 HTTP clients with automatic fallback. If one fails, it tries the next.

### Supported Clients
1. **requests** (default) - Standard Python HTTP library
2. **curl_cffi** - cURL with browser impersonation
3. **tls_client** - Advanced TLS fingerprinting
4. **httpx** - Modern async-capable HTTP client
5. **urllib3** - Low-level HTTP client
6. **cloudscraper** - Cloudflare bypass
7. **aiohttp** - Async HTTP client

### Usage

```python
# Default (tries requests first, then others)
scraper = GPlayScraper()

# Specify a client
scraper = GPlayScraper(http_client="curl_cffi")
scraper = GPlayScraper(http_client="tls_client")
scraper = GPlayScraper(http_client="httpx")
```

### Installation

```bash
# Default
pip install requests

# Advanced clients (optional)
pip install curl-cffi
pip install tls-client
pip install httpx
pip install urllib3
pip install cloudscraper
pip install aiohttp
```

**Note:** The library automatically falls back to available clients if your preferred one fails.

---

## Methods

### `app_analyze(app_id, lang='en', country='us', assets=None)`
Returns all 65+ fields as a dictionary.

```python
data = scraper.app_analyze("com.whatsapp")
# Returns: {'appId': 'com.whatsapp', 'title': 'WhatsApp Messenger', ...}

# With custom image sizes
data = scraper.app_analyze("com.whatsapp", assets="LARGE")
# Returns same data but with larger image URLs (2048px)
```

### `app_get_field(app_id, field, lang='en', country='us', assets=None)`
Returns a single field value.

```python
score = scraper.app_get_field("com.whatsapp", "score")
# Returns: 4.2

# Get high-quality icon
icon = scraper.app_get_field("com.whatsapp", "icon", assets="ORIGINAL")
# Returns: URL with maximum image quality
```

### `app_get_fields(app_id, fields, lang='en', country='us', assets=None)`
Returns multiple fields as a dictionary.

```python
data = scraper.app_get_fields("com.whatsapp", ["title", "score", "installs"])
# Returns: {'title': 'WhatsApp Messenger', 'score': 4.2, 'installs': '5,000,000,000+'}

# Get media with custom sizes
media = scraper.app_get_fields("com.whatsapp", ["icon", "screenshots"], assets="SMALL")
# Returns: Media URLs with 512px width
```

### `app_print_field(app_id, field, lang='en', country='us', assets=None)`
Prints a single field to console.

```python
scraper.app_print_field("com.whatsapp", "title")
# Output: title: WhatsApp Messenger

# Print large icon URL
scraper.app_print_field("com.whatsapp", "icon", assets="LARGE")
# Output: icon: https://...=w2048
```

### `app_print_fields(app_id, fields, lang='en', country='us', assets=None)`
Prints multiple fields to console.

```python
scraper.app_print_fields("com.whatsapp", ["title", "score"])
# Output:
# title: WhatsApp Messenger
# score: 4.2

# Print media with original quality
scraper.app_print_fields("com.whatsapp", ["icon", "screenshots"], assets="ORIGINAL")
# Output: URLs with maximum image quality
```

### `app_print_all(app_id, lang='en', country='us', assets=None)`
Prints all fields as formatted JSON.

```python
scraper.app_print_all("com.whatsapp")
# Output: Full JSON with all 65+ fields

# Print with high-quality images
scraper.app_print_all("com.whatsapp", assets="LARGE")
# Output: Full JSON with 2048px image URLs
```

---

## Available Fields (65+)

### Basic Information
- `appId` - Package name (e.g., "com.whatsapp")
- `title` - App name
- `summary` - Short description
- `description` - Full description
- `appUrl` - Play Store URL

### Ratings & Reviews
- `score` - Average rating (1-5)
- `ratings` - Total number of ratings
- `reviews` - Total number of reviews
- `histogram` - Rating distribution [1★, 2★, 3★, 4★, 5★]

### Install Metrics
- `installs` - Install range (e.g., "10,000,000+")
- `minInstalls` - Minimum installs
- `realInstalls` - Estimated real installs
- `dailyInstalls` - Estimated daily installs
- `monthlyInstalls` - Estimated monthly installs
- `minDailyInstalls` - Minimum daily installs
- `realDailyInstalls` - Real estimated daily installs
- `minMonthlyInstalls` - Minimum monthly installs
- `realMonthlyInstalls` - Real estimated monthly installs

### Pricing
- `price` - Price in currency (0 if free)
- `currency` - Currency code (e.g., "USD")
- `free` - Boolean, true if free
- `offersIAP` - Has in-app purchases
- `inAppProductPrice` - IAP price range
- `sale` - Currently on sale
- `originalPrice` - Original price if on sale

### Media
- `icon` - App icon URL
- `headerImage` - Header image URL
- `screenshots` - List of screenshot URLs
- `video` - Promo video URL
- `videoImage` - Video thumbnail URL

### Developer
- `developer` - Developer name
- `developerId` - Developer ID
- `developerEmail` - Contact email
- `developerWebsite` - Website URL
- `developerAddress` - Physical address
- `developerPhone` - Contact phone
- `privacyPolicy` - Privacy policy URL
- `publisherCountry` - Developer's country

### Category
- `genre` - Primary category (e.g., "Communication")
- `genreId` - Category ID (e.g., "COMMUNICATION")
- `categories` - List of categories

### Technical
- `version` - Current version
- `androidVersion` - Required Android version
- `minAndroidApi` - Minimum API level
- `maxAndroidApi` - Maximum API level
- `appBundle` - App bundle name

### Dates
- `released` - Release date (e.g., "Feb 24, 2009")
- `appAgeDays` - Age in days
- `lastUpdated` - Last update date

### Content
- `contentRating` - Age rating (e.g., "Everyone")
- `contentRatingDescription` - Rating description
- `whatsNew` - Recent changes list
- `permissions` - Required permissions dict
- `dataSafety` - Data safety info list

### Advertising
- `adSupported` - Contains ads
- `containsAds` - Shows advertisements

### Availability
- `available` - App is available
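
The `histogram` field pairs naturally with `ratings` for a quick distribution breakdown. A minimal sketch with sample data (no live request needed; the five-element `[1★ … 5★]` shape is taken from the field description above):

```python
# Sample histogram in the documented order [1★, 2★, 3★, 4★, 5★]
histogram = [120, 80, 300, 900, 2600]

total = sum(histogram)  # should match the `ratings` field
for stars, count in enumerate(histogram, start=1):
    print(f"{stars}★: {count / total:.1%}")
```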

---

## Practical Examples

### Competitive Analysis
```python
apps = ["com.whatsapp", "org.telegram.messenger", "com.viber.voip"]
for app_id in apps:
    data = scraper.app_get_fields(app_id, ["title", "score", "realInstalls"])
    print(f"{data['title']}: {data['score']}★ - {data['realInstalls']:,} installs")
```

### Monitor App Updates
```python
app_id = "com.whatsapp"
data = scraper.app_get_fields(app_id, ["version", "lastUpdated", "whatsNew"])
print(f"Version: {data['version']}")
print(f"Updated: {data['lastUpdated']}")
print(f"Changes: {data['whatsNew']}")
```

### Extract Developer Info
```python
app_id = "com.whatsapp"
dev_info = scraper.app_get_fields(app_id, [
    "developer", "developerEmail", "developerWebsite"
])
print(dev_info)
```

### Get High-Quality Media
```python
app_id = "com.whatsapp"
# Get original quality images
media = scraper.app_get_fields(app_id, ["icon", "screenshots"], assets="ORIGINAL")
print(f"Icon: {media['icon']}")  # Maximum quality
print(f"Screenshots: {len(media['screenshots'])} images")

# Get small thumbnails for faster loading
thumbnails = scraper.app_get_fields(app_id, ["icon", "headerImage"], assets="SMALL")
print(f"Small icon: {thumbnails['icon']}")  # 512px
```

### Check Monetization
```python
app_id = "com.whatsapp"
money = scraper.app_get_fields(app_id, [
    "free", "price", "offersIAP", "containsAds"
])
print(f"Free: {money['free']}")
print(f"Has IAP: {money['offersIAP']}")
print(f"Has Ads: {money['containsAds']}")
```

---

## Parameters

### Initialization
- `http_client` (str, optional) - HTTP client to use: "requests", "curl_cffi", "tls_client", "httpx", "urllib3", "cloudscraper", "aiohttp" (default: "requests")

### Method Parameters
- `app_id` (str, required) - App package name from Play Store URL
- `lang` (str, optional) - Language code (default: 'en')
- `country` (str, optional) - Country code (default: 'us')
- `assets` (str, optional) - Image size: 'SMALL', 'MEDIUM', 'LARGE', 'ORIGINAL' (defaults to 'MEDIUM' when omitted)
- `field` (str) - Single field name
- `fields` (List[str]) - List of field names

### Assets Parameter (Image Sizes)
- **SMALL** - 512px width (`w512`)
- **MEDIUM** - 1024px width (`w1024`) - Default
- **LARGE** - 2048px width (`w2048`)
- **ORIGINAL** - Maximum size (`w9999`)

Affects these fields: `icon`, `headerImage`, `screenshots`, `videoImage`

```python
# Different image qualities
small_icon = scraper.app_get_field("com.whatsapp", "icon", assets="SMALL")
# Returns: https://...=w512

large_icon = scraper.app_get_field("com.whatsapp", "icon", assets="LARGE")
# Returns: https://...=w2048

original_icon = scraper.app_get_field("com.whatsapp", "icon", assets="ORIGINAL")
# Returns: https://...=w9999
```

### Finding App IDs
From Play Store URL: `https://play.google.com/store/apps/details?id=com.whatsapp`  
The app_id is: `com.whatsapp`
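
If you are starting from full Play Store URLs, the `id` query parameter can be extracted with the standard library. A small helper sketch (`extract_app_id` is not part of this library):

```python
from urllib.parse import urlparse, parse_qs

def extract_app_id(url: str) -> str:
    """Pull the `id` query parameter out of a Play Store app URL."""
    query = parse_qs(urlparse(url).query)
    return query["id"][0]

app_id = extract_app_id("https://play.google.com/store/apps/details?id=com.whatsapp")
print(app_id)  # com.whatsapp
```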

### Language & Country Codes
- **Language**: 'en', 'es', 'fr', 'de', 'ja', 'ko', 'pt', 'ru', 'zh', etc.
- **Country**: 'us', 'gb', 'ca', 'au', 'in', 'br', 'jp', 'kr', 'de', 'fr', etc.

---

## When to Use Each Method

- **`app_analyze()`** - Need all data for comprehensive analysis
- **`app_get_field()`** - Need just one specific value
- **`app_get_fields()`** - Need several specific fields (more efficient than multiple get_field calls)
- **`app_print_field()`** - Quick debugging/console output
- **`app_print_fields()`** - Quick debugging of multiple values
- **`app_print_all()`** - Explore available data structure

---

## Advanced Features

### Rate Limiting
Built-in rate limiting (a one-second delay between requests) helps avoid being blocked.

### Error Handling
```python
from gplay_scraper import GPlayScraper, AppNotFoundError, NetworkError

scraper = GPlayScraper()

try:
    data = scraper.app_analyze("invalid.app.id")
except AppNotFoundError:
    print("App not found")
except NetworkError:
    print("Network error occurred")
```

### Multi-Region Data
```python
# Get data from different regions
us_data = scraper.app_analyze("com.whatsapp", country="us")
uk_data = scraper.app_analyze("com.whatsapp", country="gb")
jp_data = scraper.app_analyze("com.whatsapp", country="jp", lang="ja")
```


================================================
FILE: README/DEVELOPER_METHODS.md
================================================
# Developer Methods

Get all apps published by a specific developer on Google Play Store.

## Quick Start

```python
from gplay_scraper import GPlayScraper

scraper = GPlayScraper()

# Get all apps from a developer
apps = scraper.developer_analyze("5700313618786177705")
for app in apps:
    print(f"{app['title']}: {app['score']}★")

# Get specific fields
titles = scraper.developer_get_field("5700313618786177705", "title")
print(titles)

# Get multiple fields
apps = scraper.developer_get_fields("5700313618786177705", ["title", "score", "free"])
print(apps)
```

---

## HTTP Clients

The library supports 7 HTTP clients with automatic fallback. If one fails, it tries the next.

### Supported Clients
1. **requests** (default) - Standard Python HTTP library
2. **curl_cffi** - cURL with browser impersonation
3. **tls_client** - Advanced TLS fingerprinting
4. **httpx** - Modern async-capable HTTP client
5. **urllib3** - Low-level HTTP client
6. **cloudscraper** - Cloudflare bypass
7. **aiohttp** - Async HTTP client

### Usage

```python
# Default (tries requests first, then others)
scraper = GPlayScraper()

# Specify a client
scraper = GPlayScraper(http_client="curl_cffi")
scraper = GPlayScraper(http_client="tls_client")
scraper = GPlayScraper(http_client="httpx")
```

### Installation

```bash
# Default
pip install requests

# Advanced clients (optional)
pip install curl-cffi
pip install tls-client
pip install httpx
pip install urllib3
pip install cloudscraper
pip install aiohttp
```

**Note:** The library automatically falls back to available clients if your preferred one fails.

---

## Methods

### `developer_analyze(dev_id, count=100, lang='en', country='us')`
Returns all apps from a developer as a list of dictionaries.

```python
apps = scraper.developer_analyze("5700313618786177705", count=50)
# Returns: [{'appId': '...', 'title': '...', 'score': 4.5, ...}, ...]
```

### `developer_get_field(dev_id, field, count=100, lang='en', country='us')`
Returns a specific field from all developer apps.

```python
titles = scraper.developer_get_field("5700313618786177705", "title")
# Returns: ['App 1', 'App 2', 'App 3', ...]
```

### `developer_get_fields(dev_id, fields, count=100, lang='en', country='us')`
Returns multiple fields from all developer apps.

```python
apps = scraper.developer_get_fields("5700313618786177705", ["title", "score", "free"])
# Returns: [{'title': 'App 1', 'score': 4.5, 'free': True}, ...]
```

### `developer_print_field(dev_id, field, count=100, lang='en', country='us')`
Prints a specific field from all developer apps.

```python
scraper.developer_print_field("5700313618786177705", "title")
# Output:
# 1. title: App 1
# 2. title: App 2
# 3. title: App 3
```

### `developer_print_fields(dev_id, fields, count=100, lang='en', country='us')`
Prints multiple fields from all developer apps.

```python
scraper.developer_print_fields("5700313618786177705", ["title", "score"])
# Output:
# 1. title: App 1, score: 4.5
# 2. title: App 2, score: 4.2
```

### `developer_print_all(dev_id, count=100, lang='en', country='us')`
Prints all data for all developer apps as formatted JSON.

```python
scraper.developer_print_all("5700313618786177705")
# Output: Full JSON array with all apps
```

---

## Available Fields

- `appId` - App package name (e.g., "com.example.app")
- `title` - App name
- `description` - App description
- `icon` - App icon URL
- `url` - Play Store URL
- `developer` - Developer name
- `score` - Average rating (1-5)
- `scoreText` - Rating as text (e.g., "4.5")
- `currency` - Price currency (e.g., "USD")
- `price` - App price (0 if free)
- `free` - Boolean, true if free

---

## Practical Examples

### Analyze Developer Portfolio
```python
dev_id = "5700313618786177705"
apps = scraper.developer_analyze(dev_id)

print(f"Total apps: {len(apps)}")
rated = [a['score'] for a in apps if a['score']]
print(f"Average rating: {sum(rated) / len(rated):.2f}")
print(f"Free apps: {sum(1 for a in apps if a['free'])}")
print(f"Paid apps: {sum(1 for a in apps if not a['free'])}")
```

### Find Top-Rated Apps
```python
dev_id = "5700313618786177705"
apps = scraper.developer_get_fields(dev_id, ["title", "score"])

# Sort by rating
top_apps = sorted(apps, key=lambda x: x['score'] or 0, reverse=True)[:5]
for i, app in enumerate(top_apps, 1):
    print(f"{i}. {app['title']}: {app['score']}★")
```

### Compare Free vs Paid Apps
```python
dev_id = "5700313618786177705"
apps = scraper.developer_get_fields(dev_id, ["title", "free", "price", "score"])

free_apps = [a for a in apps if a['free']]
paid_apps = [a for a in apps if not a['free']]

def avg_score(group):
    rated = [a['score'] for a in group if a['score']]
    return sum(rated) / len(rated) if rated else 0

print(f"Free apps: {len(free_apps)} (avg rating: {avg_score(free_apps):.2f})")
print(f"Paid apps: {len(paid_apps)} (avg rating: {avg_score(paid_apps):.2f})")
```

### Export Developer Apps
```python
import json

dev_id = "5700313618786177705"
apps = scraper.developer_analyze(dev_id)

with open('developer_apps.json', 'w') as f:
    json.dump(apps, f, indent=2)

print(f"Exported {len(apps)} apps to developer_apps.json")
```

---

## Parameters

### Initialization
- `http_client` (str, optional) - HTTP client to use: "requests", "curl_cffi", "tls_client", "httpx", "urllib3", "cloudscraper", "aiohttp" (default: "requests")

### Method Parameters
- `dev_id` (str, required) - Developer ID (numeric or string)
- `count` (int, optional) - Maximum number of apps to return (default: 100)
- `lang` (str, optional) - Language code (default: 'en')
- `country` (str, optional) - Country code (default: 'us')
- `field` (str) - Single field name
- `fields` (List[str]) - List of field names

### Finding Developer IDs

**Method 1: From Developer Page URL**
- Numeric ID: `https://play.google.com/store/apps/dev?id=5700313618786177705`
  - Developer ID: `5700313618786177705`

- String ID: `https://play.google.com/store/apps/developer?id=Google+LLC`
  - Developer ID: `Google+LLC` or `Google LLC`

**Method 2: From App Page**
1. Go to any app by the developer
2. Click on the developer name
3. Extract ID from the URL
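
Both URL forms above carry the developer ID in the `id` query parameter, so one standard-library helper covers them. A sketch (`extract_dev_id` is not part of this library):

```python
from urllib.parse import urlparse, parse_qs

def extract_dev_id(url: str) -> str:
    """Pull the developer ID out of either Play Store URL form."""
    # parse_qs decodes "+" to a space, so "Google+LLC" comes back as "Google LLC"
    return parse_qs(urlparse(url).query)["id"][0]

print(extract_dev_id("https://play.google.com/store/apps/dev?id=5700313618786177705"))
# 5700313618786177705
print(extract_dev_id("https://play.google.com/store/apps/developer?id=Google+LLC"))
# Google LLC
```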

### Language & Country Codes
- **Language**: 'en', 'es', 'fr', 'de', 'ja', 'ko', 'pt', 'ru', 'zh', etc.
- **Country**: 'us', 'gb', 'ca', 'au', 'in', 'br', 'jp', 'kr', 'de', 'fr', etc.

---

## When to Use Each Method

- **`developer_analyze()`** - Need complete data for all apps
- **`developer_get_field()`** - Need just one field from all apps
- **`developer_get_fields()`** - Need specific fields from all apps (more efficient)
- **`developer_print_field()`** - Quick debugging/console output
- **`developer_print_fields()`** - Quick debugging of multiple fields
- **`developer_print_all()`** - Explore available data structure

---

## Advanced Features

### Rate Limiting
Built-in rate limiting (a one-second delay between requests) helps avoid being blocked.

### Error Handling
```python
from gplay_scraper import GPlayScraper, AppNotFoundError, NetworkError

scraper = GPlayScraper()

try:
    apps = scraper.developer_analyze("invalid_dev_id")
except AppNotFoundError:
    print("Developer not found")
except NetworkError:
    print("Network error occurred")
```

### Multi-Region Data
```python
# Get developer apps from different regions
us_apps = scraper.developer_analyze("5700313618786177705", country="us")
uk_apps = scraper.developer_analyze("5700313618786177705", country="gb")
jp_apps = scraper.developer_analyze("5700313618786177705", country="jp", lang="ja")
```

### Pagination
```python
# Get first 50 apps
apps_batch1 = scraper.developer_analyze("5700313618786177705", count=50)

# Request more apps (the library paginates automatically up to the count limit)
apps_all = scraper.developer_analyze("5700313618786177705", count=200)
```


================================================
FILE: README/LIST_METHODS.md
================================================
# List Methods

Get top charts from Google Play Store (top free, top paid, top grossing).

## Quick Start

```python
from gplay_scraper import GPlayScraper

scraper = GPlayScraper()

# Get top free apps
top_free = scraper.list_analyze("TOP_FREE", "GAME", count=50)
for app in top_free[:10]:
    print(f"{app['title']}: {app['installs']} installs")

# Get specific fields
titles = scraper.list_get_field("TOP_FREE", "title", "APPLICATION")
print(titles)

# Get multiple fields
apps = scraper.list_get_fields("TOP_PAID", ["title", "price", "score"], "GAME")
print(apps)
```

---

## HTTP Clients

The library supports 7 HTTP clients with automatic fallback. If one fails, it tries the next.

### Supported Clients
1. **requests** (default) - Standard Python HTTP library
2. **curl_cffi** - cURL with browser impersonation
3. **tls_client** - Advanced TLS fingerprinting
4. **httpx** - Modern async-capable HTTP client
5. **urllib3** - Low-level HTTP client
6. **cloudscraper** - Cloudflare bypass
7. **aiohttp** - Async HTTP client

### Usage

```python
# Default (tries requests first, then others)
scraper = GPlayScraper()

# Specify a client
scraper = GPlayScraper(http_client="curl_cffi")
scraper = GPlayScraper(http_client="tls_client")
scraper = GPlayScraper(http_client="httpx")
```

### Installation

```bash
# Default
pip install requests

# Advanced clients (optional)
pip install curl-cffi
pip install tls-client
pip install httpx
pip install urllib3
pip install cloudscraper
pip install aiohttp
```

**Note:** The library automatically falls back to available clients if your preferred one fails.

---

## Methods

### `list_analyze(collection='TOP_FREE', category='APPLICATION', count=100, lang='en', country='us')`
Returns top chart apps as a list of dictionaries.

```python
apps = scraper.list_analyze("TOP_FREE", "GAME", count=50)
# Returns: [{'appId': '...', 'title': '...', 'installs': '...', ...}, ...]
```

### `list_get_field(collection, field, category='APPLICATION', count=100, lang='en', country='us')`
Returns a specific field from all chart apps.

```python
titles = scraper.list_get_field("TOP_FREE", "title", "APPLICATION")
# Returns: ['App 1', 'App 2', 'App 3', ...]
```

### `list_get_fields(collection, fields, category='APPLICATION', count=100, lang='en', country='us')`
Returns multiple fields from all chart apps.

```python
apps = scraper.list_get_fields("TOP_PAID", ["title", "price", "score"], "GAME")
# Returns: [{'title': 'App 1', 'price': 4.99, 'score': 4.5}, ...]
```

### `list_print_field(collection, field, category='APPLICATION', count=100, lang='en', country='us')`
Prints a specific field from all chart apps.

```python
scraper.list_print_field("TOP_FREE", "title", "APPLICATION", count=20)
# Output:
# 1. title: App 1
# 2. title: App 2
# 3. title: App 3
```

### `list_print_fields(collection, fields, category='APPLICATION', count=100, lang='en', country='us')`
Prints multiple fields from all chart apps.

```python
scraper.list_print_fields("TOP_FREE", ["title", "score"], "GAME", count=20)
# Output:
# 1. title: App 1, score: 4.5
# 2. title: App 2, score: 4.2
```

### `list_print_all(collection='TOP_FREE', category='APPLICATION', count=100, lang='en', country='us')`
Prints all data for all chart apps as formatted JSON.

```python
scraper.list_print_all("TOP_FREE", "GAME", count=50)
# Output: Full JSON array with all apps
```

---

## Available Fields

- `appId` - App package name (e.g., "com.example.app")
- `title` - App name
- `description` - App description
- `icon` - App icon URL
- `screenshots` - List of screenshot URLs
- `url` - Play Store URL
- `developer` - Developer name
- `genre` - App category
- `score` - Average rating (1-5)
- `scoreText` - Rating as text (e.g., "4.5")
- `installs` - Install count (e.g., "10,000,000+")
- `currency` - Price currency (e.g., "USD")
- `price` - App price (0 if free)
- `free` - Boolean, true if free

---

## Collection Types

### Available Collections
- **`TOP_FREE`** - Top free apps (most popular free apps)
- **`TOP_PAID`** - Top paid apps (most popular paid apps)
- **`TOP_GROSSING`** - Top grossing apps (highest revenue apps)

---

## Categories

### App Categories (36)
- `APPLICATION` - All apps (default)
- `ANDROID_WEAR` - Android Wear apps
- `ART_AND_DESIGN` - Art & design
- `AUTO_AND_VEHICLES` - Auto & vehicles
- `BEAUTY` - Beauty
- `BOOKS_AND_REFERENCE` - Books & reference
- `BUSINESS` - Business
- `COMICS` - Comics
- `COMMUNICATION` - Communication
- `DATING` - Dating
- `EDUCATION` - Education
- `ENTERTAINMENT` - Entertainment
- `EVENTS` - Events
- `FINANCE` - Finance
- `FOOD_AND_DRINK` - Food & drink
- `HEALTH_AND_FITNESS` - Health & fitness
- `HOUSE_AND_HOME` - House & home
- `LIBRARIES_AND_DEMO` - Libraries & demo
- `LIFESTYLE` - Lifestyle
- `MAPS_AND_NAVIGATION` - Maps & navigation
- `MEDICAL` - Medical
- `MUSIC_AND_AUDIO` - Music & audio
- `NEWS_AND_MAGAZINES` - News & magazines
- `PARENTING` - Parenting
- `PERSONALIZATION` - Personalization
- `PHOTOGRAPHY` - Photography
- `PRODUCTIVITY` - Productivity
- `SHOPPING` - Shopping
- `SOCIAL` - Social
- `SPORTS` - Sports
- `TOOLS` - Tools
- `TRAVEL_AND_LOCAL` - Travel & local
- `VIDEO_PLAYERS` - Video players & editors
- `WATCH_FACE` - Watch faces
- `WEATHER` - Weather
- `FAMILY` - Family

### Game Categories (18)
- `GAME` - All games
- `GAME_ACTION` - Action games
- `GAME_ADVENTURE` - Adventure games
- `GAME_ARCADE` - Arcade games
- `GAME_BOARD` - Board games
- `GAME_CARD` - Card games
- `GAME_CASINO` - Casino games
- `GAME_CASUAL` - Casual games
- `GAME_EDUCATIONAL` - Educational games
- `GAME_MUSIC` - Music games
- `GAME_PUZZLE` - Puzzle games
- `GAME_RACING` - Racing games
- `GAME_ROLE_PLAYING` - Role playing games
- `GAME_SIMULATION` - Simulation games
- `GAME_SPORTS` - Sports games
- `GAME_STRATEGY` - Strategy games
- `GAME_TRIVIA` - Trivia games
- `GAME_WORD` - Word games

---

## Practical Examples

### Top Free Games Analysis
```python
top_games = scraper.list_analyze("TOP_FREE", "GAME", count=100)

print(f"Total games: {len(top_games)}")
rated = [a['score'] for a in top_games if a['score']]
print(f"Average rating: {sum(rated) / len(rated):.2f}")
print(f"\nTop 5 games:")
for i, game in enumerate(top_games[:5], 1):
    print(f"{i}. {game['title']} - {game['score']}★ - {game['installs']} installs")
```

### Compare Free vs Paid Apps
```python
top_free = scraper.list_get_fields("TOP_FREE", ["title", "score", "installs"], "APPLICATION", count=50)
top_paid = scraper.list_get_fields("TOP_PAID", ["title", "score", "price"], "APPLICATION", count=50)

free_avg = sum(a['score'] or 0 for a in top_free) / len(top_free)
paid_avg = sum(a['score'] or 0 for a in top_paid) / len(top_paid)

print(f"Top Free Apps - Avg Rating: {free_avg:.2f}")
print(f"Top Paid Apps - Avg Rating: {paid_avg:.2f}")
```

### Find Highest Grossing Apps
```python
top_grossing = scraper.list_get_fields("TOP_GROSSING", ["title", "developer", "genre"], "APPLICATION", count=20)

print("Top 10 Highest Grossing Apps:")
for i, app in enumerate(top_grossing[:10], 1):
    print(f"{i}. {app['title']} by {app['developer']} ({app['genre']})")
```

### Category Comparison
```python
categories = ["GAME", "SOCIAL", "PRODUCTIVITY", "ENTERTAINMENT"]

for category in categories:
    apps = scraper.list_get_fields("TOP_FREE", ["title", "score"], category, count=10)
    avg_score = sum(a['score'] or 0 for a in apps) / len(apps)
    print(f"{category}: {avg_score:.2f}★ average")
```

### Game Genre Analysis
```python
game_genres = ["GAME_ACTION", "GAME_PUZZLE", "GAME_CASUAL", "GAME_STRATEGY"]

for genre in game_genres:
    games = scraper.list_get_fields("TOP_FREE", ["title", "score", "installs"], genre, count=5)
    print(f"\n{genre}:")
    for i, game in enumerate(games, 1):
        print(f"  {i}. {game['title']} - {game['score']}★")
```

### Export Top Charts
```python
import json

top_free = scraper.list_analyze("TOP_FREE", "GAME", count=100)

with open('top_free_games.json', 'w') as f:
    json.dump(top_free, f, indent=2)

print(f"Exported {len(top_free)} games to top_free_games.json")
```

### Track Chart Positions
```python
import time
import json
from datetime import datetime

def track_charts():
    snapshot = {
        "timestamp": datetime.now().isoformat(),
        "top_free": scraper.list_get_fields("TOP_FREE", ["title", "score"], "GAME", count=10),
        "top_paid": scraper.list_get_fields("TOP_PAID", ["title", "price"], "GAME", count=10)
    }
    
    with open(f'charts_{datetime.now().strftime("%Y%m%d")}.json', 'w') as f:
        json.dump(snapshot, f, indent=2)
    
    print(f"Snapshot saved at {snapshot['timestamp']}")

track_charts()
```

---

## Parameters

### Initialization
- `http_client` (str, optional) - HTTP client to use: "requests", "curl_cffi", "tls_client", "httpx", "urllib3", "cloudscraper", "aiohttp" (default: "requests")

### Method Parameters
- `collection` (str) - Chart type: "TOP_FREE", "TOP_PAID", "TOP_GROSSING" (default: "TOP_FREE")
- `category` (str, optional) - Category filter (default: "APPLICATION")
- `count` (int, optional) - Maximum number of apps to return (default: 100)
- `lang` (str, optional) - Language code (default: 'en')
- `country` (str, optional) - Country code (default: 'us')
- `field` (str) - Single field name
- `fields` (List[str]) - List of field names

### Language & Country Codes
- **Language**: 'en', 'es', 'fr', 'de', 'ja', 'ko', 'pt', 'ru', 'zh', etc.
- **Country**: 'us', 'gb', 'ca', 'au', 'in', 'br', 'jp', 'kr', 'de', 'fr', etc.

---

## When to Use Each Method

- **`list_analyze()`** - Need complete data for all chart apps
- **`list_get_field()`** - Need just one field from all apps
- **`list_get_fields()`** - Need specific fields from all apps (more efficient)
- **`list_print_field()`** - Quick debugging/console output
- **`list_print_fields()`** - Quick debugging of multiple fields
- **`list_print_all()`** - Explore available data structure

---

## Advanced Features

### Rate Limiting
Built-in rate limiting (a one-second delay between requests) helps avoid being blocked.

### Error Handling
```python
from gplay_scraper import GPlayScraper, AppNotFoundError, NetworkError

scraper = GPlayScraper()

try:
    apps = scraper.list_analyze("INVALID_COLLECTION", "GAME")
except AppNotFoundError:
    print("Collection not found")
except NetworkError:
    print("Network error occurred")
```

### Multi-Region Charts
```python
# Get charts from different regions
us_charts = scraper.list_analyze("TOP_FREE", "GAME", country="us")
uk_charts = scraper.list_analyze("TOP_FREE", "GAME", country="gb")
jp_charts = scraper.list_analyze("TOP_FREE", "GAME", country="jp", lang="ja")

print(f"US Top Game: {us_charts[0]['title']}")
print(f"UK Top Game: {uk_charts[0]['title']}")
print(f"JP Top Game: {jp_charts[0]['title']}")
```

### Batch Analysis
```python
# Analyze multiple collections at once
collections = ["TOP_FREE", "TOP_PAID", "TOP_GROSSING"]
results = {}

for collection in collections:
    apps = scraper.list_get_fields(collection, ["title", "score"], "GAME", count=10)
    results[collection] = apps
    print(f"{collection}: {len(apps)} apps retrieved")
```


================================================
FILE: README/README.md
================================================
# GPlay Scraper Documentation

Complete documentation for all 7 method types in GPlay Scraper.

## 📚 Method Documentation

### [App Methods](APP_METHODS.md)
Extract comprehensive app data with 65+ fields including ratings, installs, pricing, screenshots, permissions, and technical details.

**Key Features:**
- 65+ data fields per app
- Basic info, ratings, installs, pricing
- Media content (screenshots, videos, icons)
- Technical specs (version, size, Android version)
- Developer information and contact details

**Use Cases:** App analysis, competitive research, market intelligence, data collection

---

### [Search Methods](SEARCH_METHODS.md)
Search Google Play Store apps by keyword with filtering and pagination.

**Key Features:**
- Search by keyword, app name, or category
- Filter and paginate results
- Get app titles, developers, ratings, prices
- Multi-language and multi-region support

**Use Cases:** App discovery, market research, competitor analysis, trend tracking

---

### [Reviews Methods](REVIEWS_METHODS.md)
Extract user reviews with ratings, timestamps, and detailed feedback for sentiment analysis.

**Key Features:**
- Get reviews with ratings (1-5 stars)
- Review text, timestamps, app versions
- Reviewer names and helpful vote counts
- Sort by newest, relevant, or highest rated

**Use Cases:** Sentiment analysis, user feedback, app improvement, competitive monitoring

---

### [Developer Methods](DEVELOPER_METHODS.md)
Get all apps published by a specific developer using their developer ID.

**Key Features:**
- Complete app portfolio for any developer
- Track developer's app performance
- Analyze ratings and install counts
- Monitor developer's market presence

**Use Cases:** Developer research, portfolio analysis, competitive intelligence, market tracking

---

### [List Methods](LIST_METHODS.md)
Access Google Play Store top charts including top free, top paid, and top grossing apps by category.

**Key Features:**
- Top free, top paid, top grossing charts
- 54 categories (36 app + 18 game)
- Ranked lists with install counts and ratings
- Trending apps and market leaders

**Use Cases:** Market trends, category analysis, competitive benchmarking, app discovery

---

### [Similar Methods](SIMILAR_METHODS.md)
Find apps similar to a reference app for competitive analysis and market research.

**Key Features:**
- Discover competitor apps
- Find similar/related apps
- Get titles, developers, ratings, pricing
- Competitive analysis and positioning

**Use Cases:** Competitive analysis, market research, app discovery, positioning strategy

---

### [Suggest Methods](SUGGEST_METHODS.md)
Get search suggestions and autocomplete from Google Play Store for keyword discovery and ASO.

**Key Features:**
- Autocomplete suggestions
- Popular search terms
- Nested keyword discovery
- Multi-language support

**Use Cases:** Keyword research, ASO optimization, content strategy, market insights

---

## 🚀 Quick Start

```python
from gplay_scraper import GPlayScraper

scraper = GPlayScraper()

# App Methods
scraper.app_print_all("com.whatsapp")

# Search Methods
scraper.search_print_all("fitness tracker", count=20)

# Reviews Methods
scraper.reviews_print_all("com.whatsapp", count=100, sort="NEWEST")

# Developer Methods
scraper.developer_print_all("5700313618786177705", count=50)

# List Methods
scraper.list_print_all("TOP_FREE", "GAME", count=50)

# Similar Methods
scraper.similar_print_all("com.whatsapp", count=30)

# Suggest Methods
scraper.suggest_print_all("photo editor", count=10)
```

## 📖 Method Pattern

Each method type follows the same pattern with 6 functions:

- **`analyze()`** - Get all data as dictionary/list
- **`get_field()`** - Get single field value
- **`get_fields()`** - Get multiple fields as dictionary
- **`print_field()`** - Print single field to console
- **`print_fields()`** - Print multiple fields to console
- **`print_all()`** - Print all data as formatted JSON
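
Since every method type pairs one of the seven prefixes from the quick start with these six suffixes, the full method surface can be enumerated mechanically. A sketch of the naming convention only (it builds name strings, not live calls):

```python
prefixes = ["app", "search", "reviews", "developer", "list", "similar", "suggest"]
suffixes = ["analyze", "get_field", "get_fields", "print_field", "print_fields", "print_all"]

# 7 method types x 6 functions = 42 method names
method_names = [f"{p}_{s}" for p in prefixes for s in suffixes]
print(len(method_names))  # 42
print(method_names[:3])   # ['app_analyze', 'app_get_field', 'app_get_fields']
```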

## 🌍 Multi-Language & Multi-Region

All methods support multi-language and multi-region parameters:

```python
# Get data in Spanish from Spain
scraper.app_analyze("com.whatsapp", lang="es", country="es")

# Get data in Japanese from Japan
scraper.search_analyze("game", count=20, lang="ja", country="jp")

# Get data in French from France
scraper.reviews_analyze("com.whatsapp", count=50, lang="fr", country="fr")
```

**Supported:**
- **Languages:** 100+ (en, es, fr, de, ja, ko, zh, ar, pt, ru, etc.)
- **Countries:** 150+ (us, gb, ca, au, in, br, jp, kr, de, fr, etc.)

## 🔧 HTTP Clients

All methods support 7 HTTP clients with automatic fallback:

```python
# Default (requests)
scraper = GPlayScraper()

# Specify client
scraper = GPlayScraper(http_client="curl_cffi")
scraper = GPlayScraper(http_client="tls_client")
scraper = GPlayScraper(http_client="httpx")
```

**Available Clients:**
1. **requests** (default) - Standard Python HTTP library
2. **curl_cffi** - Browser impersonation with TLS fingerprinting
3. **tls_client** - Custom TLS fingerprinting
4. **httpx** - Modern async-capable HTTP client
5. **urllib3** - Low-level HTTP client
6. **cloudscraper** - Cloudflare bypass capabilities
7. **aiohttp** - Async HTTP client

## 📊 What Can You Scrape?

### App Data (65+ Fields)
- Basic: title, developer, description, category, genre
- Ratings: score, ratings count, histogram
- Installs: install count ranges, statistics
- Pricing: free/paid, price, in-app purchases
- Media: icon, screenshots, video, header image
- Technical: version, size, Android version, dates
- Content: age rating, privacy policy, contact info
- Features: permissions, what's new, website

### Search & Discovery
- Search apps by keyword
- Get search suggestions
- Find similar/competitor apps
- Access top charts by category

### Developer Intelligence
- Complete app portfolio
- Performance tracking
- Market presence analysis

### User Reviews
- Reviews with ratings and text
- Timestamps and app versions
- Reviewer names and votes
- Filter by sort options

### Market Research
- Multi-language support (100+ languages)
- Multi-region data (150+ countries)
- Localized pricing and availability
- Competitive analysis

## 🎯 Use Cases

**Market Research**
- Analyze competitor apps
- Track market trends
- Identify opportunities
- Benchmark performance

**App Development**
- Monitor user feedback
- Track app performance
- Analyze competitors
- Optimize app store presence

**Data Analysis**
- Collect app data for research
- Sentiment analysis from reviews
- Market intelligence reports
- Machine learning datasets

**Business Intelligence**
- Competitive monitoring
- Market positioning
- Trend analysis
- Strategic planning

## 📄 License

This project is licensed under the MIT License.

---

**For detailed documentation on each method type, click the links above.**


================================================
FILE: README/REVIEWS_METHODS.md
================================================
# Reviews Methods

Extract user reviews from Google Play Store apps with ratings, content, and metadata.

## Quick Start

```python
from gplay_scraper import GPlayScraper

scraper = GPlayScraper()

# Get reviews
reviews = scraper.reviews_analyze("com.whatsapp", count=100, sort="NEWEST")
for review in reviews[:5]:
    print(f"{review['userName']}: {review['score']}★")
    print(f"  {review['content'][:100]}...")

# Get specific fields
scores = scraper.reviews_get_field("com.whatsapp", "score", count=100)
print(f"Average: {sum(scores)/len(scores):.2f}★")

# Get multiple fields
reviews = scraper.reviews_get_fields("com.whatsapp", ["userName", "score", "content"], count=50)
print(reviews)
```

---

## HTTP Clients

The library supports 7 HTTP clients with automatic fallback. If one fails, it tries the next.

### Supported Clients
1. **requests** (default) - Standard Python HTTP library
2. **curl_cffi** - cURL with browser impersonation
3. **tls_client** - Advanced TLS fingerprinting
4. **httpx** - Modern async-capable HTTP client
5. **urllib3** - Low-level HTTP client
6. **cloudscraper** - Cloudflare bypass
7. **aiohttp** - Async HTTP client

### Usage

```python
# Default (tries requests first, then others)
scraper = GPlayScraper()

# Specify a client
scraper = GPlayScraper(http_client="curl_cffi")
scraper = GPlayScraper(http_client="tls_client")
scraper = GPlayScraper(http_client="httpx")
```

### Installation

```bash
# Default
pip install requests

# Advanced clients (optional)
pip install curl-cffi
pip install tls-client
pip install httpx
pip install urllib3
pip install cloudscraper
pip install aiohttp
```

**Note:** The library automatically falls back to available clients if your preferred one fails.

---

## Methods

### `reviews_analyze(app_id, count=100, lang='en', country='us', sort='NEWEST')`
Returns reviews as a list of dictionaries.

```python
reviews = scraper.reviews_analyze("com.whatsapp", count=100, sort="NEWEST")
# Returns: [{'reviewId': '...', 'userName': '...', 'score': 5, 'content': '...', ...}, ...]
```

### `reviews_get_field(app_id, field, count=100, lang='en', country='us', sort='NEWEST')`
Returns a specific field from all reviews.

```python
scores = scraper.reviews_get_field("com.whatsapp", "score", count=100)
# Returns: [5, 4, 5, 3, 4, ...]
```

### `reviews_get_fields(app_id, fields, count=100, lang='en', country='us', sort='NEWEST')`
Returns multiple fields from all reviews.

```python
reviews = scraper.reviews_get_fields("com.whatsapp", ["userName", "score", "content"], count=50)
# Returns: [{'userName': 'John', 'score': 5, 'content': 'Great app!'}, ...]
```

### `reviews_print_field(app_id, field, count=100, lang='en', country='us', sort='NEWEST')`
Prints a specific field from all reviews.

```python
scraper.reviews_print_field("com.whatsapp", "content", count=20)
# Output:
# 1. content: Great app!
# 2. content: Love it
# 3. content: Needs improvement
```

### `reviews_print_fields(app_id, fields, count=100, lang='en', country='us', sort='NEWEST')`
Prints multiple fields from all reviews.

```python
scraper.reviews_print_fields("com.whatsapp", ["userName", "score"], count=20)
# Output:
# userName: John, score: 5
# userName: Jane, score: 4
```

### `reviews_print_all(app_id, count=100, lang='en', country='us', sort='NEWEST')`
Prints all review data as formatted JSON.

```python
scraper.reviews_print_all("com.whatsapp", count=50)
# Output: Full JSON array with all reviews
```

---

## Available Fields

- `reviewId` - Unique review ID
- `userName` - Reviewer name
- `userImage` - Reviewer avatar URL
- `score` - Review rating (1-5 stars)
- `content` - Review text/comment
- `thumbsUpCount` - Number of helpful votes
- `appVersion` - App version reviewed
- `at` - Review timestamp (ISO 8601 format)
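Once fetched, these fields are plain Python values, so post-processing is ordinary dict work. As a sketch, here is a thumbs-up-weighted average score; the sample list is made up, shaped like `reviews_get_fields(..., ["score", "thumbsUpCount"])` output:

```python
# Sample data shaped like reviews_get_fields output (values are illustrative)
reviews = [
    {"score": 5, "thumbsUpCount": 10},
    {"score": 1, "thumbsUpCount": 40},
    {"score": 4, "thumbsUpCount": 0},
]

def weighted_score(reviews):
    """Average score weighted by helpful votes (each review counts at least once)."""
    weights = [(r["thumbsUpCount"] or 0) + 1 for r in reviews]
    total = sum(r["score"] * w for r, w in zip(reviews, weights))
    return total / sum(weights)

print(f"Weighted average: {weighted_score(reviews):.2f}★")  # → Weighted average: 1.89★
```

Weighting by `thumbsUpCount` surfaces the sentiment readers actually endorsed, which can differ sharply from the plain average.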

---

## Sort Options

- **`NEWEST`** (default) - Most recent reviews first
- **`RELEVANT`** - Most relevant/helpful reviews
- **`RATING`** - Sorted by rating score

---

## Practical Examples

### Sentiment Analysis
```python
reviews = scraper.reviews_get_fields("com.whatsapp", ["score", "content"], count=200)

# Rating distribution
rating_dist = {1: 0, 2: 0, 3: 0, 4: 0, 5: 0}
for review in reviews:
    rating_dist[review['score']] += 1

print("Rating Distribution:")
for rating, count in rating_dist.items():
    print(f"{rating}★: {'█' * count} ({count})")

# Average rating
avg = sum(r['score'] for r in reviews) / len(reviews)
print(f"\nAverage: {avg:.2f}★")
```

### Find Common Issues
```python
reviews = scraper.reviews_get_fields("com.whatsapp", ["score", "content"], count=100, sort="RATING")

# Get low-rated reviews
low_rated = [r for r in reviews if r['score'] <= 2]

print(f"Found {len(low_rated)} low-rated reviews:")
for review in low_rated[:10]:
    print(f"- {review['content'][:100]}...")
```

### Track Review Trends
```python
from datetime import datetime

reviews = scraper.reviews_get_fields("com.whatsapp", ["at", "score"], count=500, sort="NEWEST")

# Group by month
monthly_scores = {}
for review in reviews:
    date = datetime.fromisoformat(review['at'])
    month_key = date.strftime("%Y-%m")
    
    if month_key not in monthly_scores:
        monthly_scores[month_key] = []
    monthly_scores[month_key].append(review['score'])

# Calculate monthly averages
for month, scores in sorted(monthly_scores.items()):
    avg = sum(scores) / len(scores)
    print(f"{month}: {avg:.2f}★ ({len(scores)} reviews)")
```

### Compare App Versions
```python
reviews = scraper.reviews_get_fields("com.whatsapp", ["appVersion", "score"], count=300)

# Group by version
version_scores = {}
for review in reviews:
    version = review['appVersion'] or "Unknown"
    if version not in version_scores:
        version_scores[version] = []
    version_scores[version].append(review['score'])

# Show version ratings
for version, scores in sorted(version_scores.items()):
    if len(scores) >= 5:  # Only versions with 5+ reviews
        avg = sum(scores) / len(scores)
        print(f"v{version}: {avg:.2f}★ ({len(scores)} reviews)")
```

### Export Reviews to CSV
```python
import csv

reviews = scraper.reviews_analyze("com.whatsapp", count=500)

with open('reviews.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.DictWriter(f, fieldnames=['userName', 'score', 'content', 'at', 'appVersion'])
    writer.writeheader()
    
    for review in reviews:
        writer.writerow({
            'userName': review['userName'],
            'score': review['score'],
            'content': review['content'],
            'at': review['at'],
            'appVersion': review['appVersion']
        })

print(f"Exported {len(reviews)} reviews to reviews.csv")
```

### Identify Top Reviewers
```python
reviews = scraper.reviews_get_fields("com.whatsapp", ["userName", "thumbsUpCount"], count=200)

# Sort by helpful votes
top_reviewers = sorted(reviews, key=lambda x: x['thumbsUpCount'] or 0, reverse=True)[:10]

print("Top 10 Most Helpful Reviewers:")
for i, review in enumerate(top_reviewers, 1):
    print(f"{i}. {review['userName']}: {review['thumbsUpCount']} helpful votes")
```

### Monitor Recent Feedback
```python
import time
from datetime import datetime

def monitor_reviews(app_id, interval=3600):
    """Check for new reviews every hour"""
    last_check = datetime.now()
    
    while True:
        reviews = scraper.reviews_get_fields(app_id, ["at", "score", "content"], count=50, sort="NEWEST")
        
        new_reviews = [r for r in reviews if datetime.fromisoformat(r['at']) > last_check]
        
        if new_reviews:
            print(f"\n{len(new_reviews)} new reviews:")
            for review in new_reviews:
                print(f"- {review['score']}★: {review['content'][:80]}...")
        
        last_check = datetime.now()
        time.sleep(interval)

# Run monitor (Ctrl+C to stop)
# monitor_reviews("com.whatsapp")
```

### Keyword Analysis
```python
from collections import Counter
import re

reviews = scraper.reviews_get_field("com.whatsapp", "content", count=500)

# Extract words
words = []
for content in reviews:
    if content:
        words.extend(re.findall(r'\b\w+\b', content.lower()))

# Remove common words
stop_words = {'the', 'a', 'an', 'and', 'or', 'but', 'is', 'are', 'was', 'were', 'in', 'on', 'at', 'to', 'for'}
filtered_words = [w for w in words if w not in stop_words and len(w) > 3]

# Top keywords
top_keywords = Counter(filtered_words).most_common(20)
print("Top Keywords in Reviews:")
for word, count in top_keywords:
    print(f"{word}: {count}")
```

---

## Parameters

### Initialization
- `http_client` (str, optional) - HTTP client to use: "requests", "curl_cffi", "tls_client", "httpx", "urllib3", "cloudscraper", "aiohttp" (default: "requests")

### Method Parameters
- `app_id` (str, required) - App package name
- `count` (int, optional) - Maximum number of reviews to return (default: 100)
- `lang` (str, optional) - Language code (default: 'en')
- `country` (str, optional) - Country code (default: 'us')
- `sort` (str, optional) - Sort order: "NEWEST", "RELEVANT", "RATING" (default: "NEWEST")
- `field` (str) - Single field name
- `fields` (List[str]) - List of field names

### Language & Country Codes
- **Language**: 'en', 'es', 'fr', 'de', 'ja', 'ko', 'pt', 'ru', 'zh', etc.
- **Country**: 'us', 'gb', 'ca', 'au', 'in', 'br', 'jp', 'kr', 'de', 'fr', etc.

---

## When to Use Each Method

- **`reviews_analyze()`** - Need complete review data for analysis
- **`reviews_get_field()`** - Need just one field (e.g., all scores)
- **`reviews_get_fields()`** - Need specific fields (more efficient)
- **`reviews_print_field()`** - Quick debugging/console output
- **`reviews_print_fields()`** - Quick debugging of multiple fields
- **`reviews_print_all()`** - Explore available data structure

---

## Advanced Features

### Rate Limiting
Built-in rate limiting (a 1-second delay between requests) reduces the risk of being blocked.
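The library applies this delay internally, so you do not need to add your own. Purely for illustration (this is not the library's actual implementation), a minimal limiter of the same shape looks like:

```python
import time

class RateLimiter:
    """Block until at least `min_interval` seconds have passed since the last call."""
    def __init__(self, min_interval=1.0):
        self.min_interval = min_interval
        self._last = None

    def wait(self):
        now = time.monotonic()
        if self._last is not None:
            remaining = self.min_interval - (now - self._last)
            if remaining > 0:
                time.sleep(remaining)
        self._last = time.monotonic()

limiter = RateLimiter(min_interval=0.05)  # short interval to keep the demo fast
start = time.monotonic()
for _ in range(3):
    limiter.wait()  # in real use: issue the scraper request right after this
elapsed = time.monotonic() - start
print(f"3 calls took at least {elapsed:.2f}s")
```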

### Batch Fetching
Reviews are fetched in batches of 50. The library automatically handles pagination.

```python
# Fetch 500 reviews (10 batches of 50)
reviews = scraper.reviews_analyze("com.whatsapp", count=500)
print(f"Fetched {len(reviews)} reviews")
```
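The number of requests a fetch implies can be estimated up front. A small sketch of the batching arithmetic, assuming the documented batch size of 50 (an internal detail that may change):

```python
import math

BATCH_SIZE = 50  # per the docs above; internal detail, may change

def batches_needed(count, batch_size=BATCH_SIZE):
    """How many paginated requests a reviews fetch of `count` items implies."""
    return math.ceil(count / batch_size)

print(batches_needed(500))  # → 10
print(batches_needed(101))  # → 3 (the last batch is partial)
```

Combined with the 1-second rate limit, this gives a rough lower bound on wall-clock time for large fetches.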

### Error Handling
```python
from gplay_scraper import GPlayScraper, AppNotFoundError, NetworkError

scraper = GPlayScraper()

try:
    reviews = scraper.reviews_analyze("invalid.app.id")
except AppNotFoundError:
    print("App not found")
except NetworkError:
    print("Network error occurred")
```

### Multi-Region Reviews
```python
# Get reviews from different regions
us_reviews = scraper.reviews_analyze("com.whatsapp", country="us", count=100)
uk_reviews = scraper.reviews_analyze("com.whatsapp", country="gb", count=100)
jp_reviews = scraper.reviews_analyze("com.whatsapp", country="jp", lang="ja", count=100)

print(f"US avg: {sum(r['score'] for r in us_reviews)/len(us_reviews):.2f}★")
print(f"UK avg: {sum(r['score'] for r in uk_reviews)/len(uk_reviews):.2f}★")
print(f"JP avg: {sum(r['score'] for r in jp_reviews)/len(jp_reviews):.2f}★")
```

### Sort Comparison
```python
# Compare different sort orders
newest = scraper.reviews_get_fields("com.whatsapp", ["score"], count=100, sort="NEWEST")
relevant = scraper.reviews_get_fields("com.whatsapp", ["score"], count=100, sort="RELEVANT")
rating = scraper.reviews_get_fields("com.whatsapp", ["score"], count=100, sort="RATING")

print(f"Newest avg: {sum(r['score'] for r in newest)/len(newest):.2f}★")
print(f"Relevant avg: {sum(r['score'] for r in relevant)/len(relevant):.2f}★")
print(f"Rating avg: {sum(r['score'] for r in rating)/len(rating):.2f}★")
```


================================================
FILE: README/SEARCH_METHODS.md
================================================
# Search Methods

Search for apps on Google Play Store by keyword, app name, or category.

## Quick Start

```python
from gplay_scraper import GPlayScraper

scraper = GPlayScraper()

# Search for apps
results = scraper.search_analyze("social media", count=20)
for app in results:
    print(f"{app['title']}: {app['score']}★ by {app['developer']}")

# Get specific fields
titles = scraper.search_get_field("fitness tracker", "title")
print(titles)

# Get multiple fields
apps = scraper.search_get_fields("photo editor", ["title", "score", "free"])
print(apps)
```

---

## HTTP Clients

The library supports 7 HTTP clients with automatic fallback. If one fails, it tries the next.

### Supported Clients
1. **requests** (default) - Standard Python HTTP library
2. **curl_cffi** - cURL with browser impersonation
3. **tls_client** - Advanced TLS fingerprinting
4. **httpx** - Modern async-capable HTTP client
5. **urllib3** - Low-level HTTP client
6. **cloudscraper** - Cloudflare bypass
7. **aiohttp** - Async HTTP client

### Usage

```python
# Default (tries requests first, then others)
scraper = GPlayScraper()

# Specify a client
scraper = GPlayScraper(http_client="curl_cffi")
scraper = GPlayScraper(http_client="tls_client")
scraper = GPlayScraper(http_client="httpx")
```

### Installation

```bash
# Default
pip install requests

# Advanced clients (optional)
pip install curl-cffi
pip install tls-client
pip install httpx
pip install urllib3
pip install cloudscraper
pip install aiohttp
```

**Note:** The library automatically falls back to available clients if your preferred one fails.

---

## Methods

### `search_analyze(query, count=100, lang='en', country='us')`
Returns search results as a list of dictionaries.

```python
results = scraper.search_analyze("social media", count=20)
# Returns: [{'appId': '...', 'title': '...', 'score': 4.5, ...}, ...]
```

### `search_get_field(query, field, count=100, lang='en', country='us')`
Returns a specific field from all search results.

```python
titles = scraper.search_get_field("fitness tracker", "title")
# Returns: ['App 1', 'App 2', 'App 3', ...]
```

### `search_get_fields(query, fields, count=100, lang='en', country='us')`
Returns multiple fields from all search results.

```python
apps = scraper.search_get_fields("photo editor", ["title", "score", "free"])
# Returns: [{'title': 'App 1', 'score': 4.5, 'free': True}, ...]
```

### `search_print_field(query, field, count=100, lang='en', country='us')`
Prints a specific field from all search results.

```python
scraper.search_print_field("social media", "title", count=10)
# Output:
# 0. title: App 1
# 1. title: App 2
# 2. title: App 3
```

### `search_print_fields(query, fields, count=100, lang='en', country='us')`
Prints multiple fields from all search results.

```python
scraper.search_print_fields("social media", ["title", "score"], count=10)
# Output:
# 0. title: App 1, score: 4.5
# 1. title: App 2, score: 4.2
```

### `search_print_all(query, count=100, lang='en', country='us')`
Prints all data for all search results as formatted JSON.

```python
scraper.search_print_all("social media", count=20)
# Output: Full JSON array with all search results
```

---

## Available Fields

- `appId` - App package name (e.g., "com.example.app")
- `title` - App name
- `description` - App description/summary
- `icon` - App icon URL
- `url` - Play Store URL
- `developer` - Developer name
- `score` - Average rating (1-5)
- `scoreText` - Rating as text (e.g., "4.5")
- `currency` - Price currency (e.g., "USD")
- `price` - App price (0 if free)
- `free` - Boolean, true if free
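Even when you fetch only `appId`, the store URL can be reconstructed, since Play Store detail pages follow a fixed pattern with `id`, `hl` (language), and `gl` (country) query parameters. A sketch (the second sample ID is made up):

```python
def store_url(app_id, lang="en", country="us"):
    """Rebuild a Play Store detail-page URL from a package name."""
    return (f"https://play.google.com/store/apps/details"
            f"?id={app_id}&hl={lang}&gl={country}")

for app_id in ["com.whatsapp", "org.example.demo"]:
    print(store_url(app_id))
```

This is handy when exporting results that only include `appId`, or for building localized links without re-fetching the `url` field.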

---

## Practical Examples

### Find Top-Rated Apps
```python
results = scraper.search_get_fields("productivity", ["title", "score", "developer"], count=50)

# Filter high-rated apps
top_rated = [app for app in results if app['score'] and app['score'] >= 4.5]
top_rated.sort(key=lambda x: x['score'], reverse=True)

print("Top-Rated Productivity Apps:")
for i, app in enumerate(top_rated[:10], 1):
    print(f"{i}. {app['title']}: {app['score']}★ by {app['developer']}")
```

### Compare Free vs Paid Apps
```python
results = scraper.search_get_fields("photo editor", ["title", "free", "price", "score"], count=50)

free_apps = [app for app in results if app['free']]
paid_apps = [app for app in results if not app['free']]

free_avg = sum(app['score'] or 0 for app in free_apps) / len(free_apps) if free_apps else 0
paid_avg = sum(app['score'] or 0 for app in paid_apps) / len(paid_apps) if paid_apps else 0

print(f"Free apps: {len(free_apps)} (avg: {free_avg:.2f}★)")
print(f"Paid apps: {len(paid_apps)} (avg: {paid_avg:.2f}★)")
```

### Market Research
```python
keywords = ["fitness", "meditation", "diet", "sleep tracker"]

for keyword in keywords:
    results = scraper.search_get_fields(keyword, ["title", "score"], count=10)
    if not results:
        print(f"{keyword}: no apps found")
        continue
    avg_score = sum(app['score'] or 0 for app in results) / len(results)
    print(f"{keyword}: {len(results)} apps, avg {avg_score:.2f}★")
```

### Find Competitors
```python
query = "task manager"
results = scraper.search_get_fields(query, ["title", "developer", "score", "free"], count=30)

print(f"Competitors for '{query}':")
for i, app in enumerate(results[:15], 1):
    price = "Free" if app['free'] else (f"${app['price']}" if app.get('price') else "N/A")
    print(f"{i}. {app['title']} by {app['developer']} - {app['score']}★ ({price})")
```

### Export Search Results
```python
import json

query = "language learning"
results = scraper.search_analyze(query, count=100)

with open(f'search_{query.replace(" ", "_")}.json', 'w') as f:
    json.dump(results, f, indent=2)

print(f"Exported {len(results)} results for '{query}'")
```

### Multi-Keyword Search
```python
keywords = ["vpn", "proxy", "security"]
all_results = {}

for keyword in keywords:
    results = scraper.search_get_fields(keyword, ["appId", "title", "score"], count=20)
    all_results[keyword] = results
    print(f"{keyword}: {len(results)} apps found")

# Find apps appearing in multiple searches
app_ids = {}
for keyword, results in all_results.items():
    for app in results:
        app_id = app['appId']
        if app_id not in app_ids:
            app_ids[app_id] = {'title': app['title'], 'keywords': []}
        app_ids[app_id]['keywords'].append(keyword)

# Apps in multiple categories
multi_category = {aid: data for aid, data in app_ids.items() if len(data['keywords']) > 1}
print(f"\nApps in multiple categories: {len(multi_category)}")
for app_id, data in list(multi_category.items())[:5]:
    print(f"- {data['title']}: {', '.join(data['keywords'])}")
```

### Analyze Developer Presence
```python
from collections import Counter

query = "puzzle game"
results = scraper.search_get_field(query, "developer", count=100)

# Count apps per developer
developer_counts = Counter(results)
top_developers = developer_counts.most_common(10)

print(f"Top Developers in '{query}':")
for developer, count in top_developers:
    print(f"{developer}: {count} apps")
```

### Price Range Analysis
```python
query = "premium photo editor"
results = scraper.search_get_fields(query, ["title", "price", "free"], count=50)

paid_apps = [app for app in results if not app['free'] and app['price']]

if paid_apps:
    prices = [app['price'] for app in paid_apps]
    print(f"Price Analysis for '{query}':")
    print(f"  Min: ${min(prices):.2f}")
    print(f"  Max: ${max(prices):.2f}")
    print(f"  Avg: ${sum(prices)/len(prices):.2f}")
    print(f"  Total paid apps: {len(paid_apps)}")
```

---

## Parameters

### Initialization
- `http_client` (str, optional) - HTTP client to use: "requests", "curl_cffi", "tls_client", "httpx", "urllib3", "cloudscraper", "aiohttp" (default: "requests")

### Method Parameters
- `query` (str, required) - Search keyword or phrase
- `count` (int, optional) - Maximum number of results to return (default: 100)
- `lang` (str, optional) - Language code (default: 'en')
- `country` (str, optional) - Country code (default: 'us')
- `field` (str) - Single field name
- `fields` (List[str]) - List of field names

### Search Query Tips
- Use specific keywords: "fitness tracker" vs "fitness"
- Try app categories: "puzzle game", "photo editor"
- Search by functionality: "vpn", "password manager"
- Use brand names: "google", "microsoft"
- Combine terms: "free music player"
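The tips above can also be combined programmatically, crossing modifiers with base terms to generate query variants for `search_analyze`:

```python
from itertools import product

# Illustrative modifier/base lists — substitute your own vocabulary
modifiers = ["free", "best", "offline"]
bases = ["music player", "photo editor"]

queries = [f"{m} {b}" for m, b in product(modifiers, bases)]
print(queries)
# Each entry can then be passed to scraper.search_analyze(query, count=20)
```

Mind the rate limit when running many variants: each query is a separate request.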

### Language & Country Codes
- **Language**: 'en', 'es', 'fr', 'de', 'ja', 'ko', 'pt', 'ru', 'zh', etc.
- **Country**: 'us', 'gb', 'ca', 'au', 'in', 'br', 'jp', 'kr', 'de', 'fr', etc.

---

## When to Use Each Method

- **`search_analyze()`** - Need complete data for all search results
- **`search_get_field()`** - Need just one field from all results
- **`search_get_fields()`** - Need specific fields from all results (more efficient)
- **`search_print_field()`** - Quick debugging/console output
- **`search_print_fields()`** - Quick debugging of multiple fields
- **`search_print_all()`** - Explore available data structure

---

## Advanced Features

### Rate Limiting
Built-in rate limiting (a 1-second delay between requests) reduces the risk of being blocked.

### Error Handling
```python
from gplay_scraper import GPlayScraper, AppNotFoundError, NetworkError

scraper = GPlayScraper()

try:
    results = scraper.search_analyze("")
except ValueError:
    print("Query cannot be empty")
except NetworkError:
    print("Network error occurred")
```

### Multi-Region Search
```python
# Search in different regions
us_results = scraper.search_analyze("vpn", country="us", count=20)
uk_results = scraper.search_analyze("vpn", country="gb", count=20)
jp_results = scraper.search_analyze("vpn", country="jp", lang="ja", count=20)

print(f"US: {len(us_results)} results")
print(f"UK: {len(uk_results)} results")
print(f"JP: {len(jp_results)} results")
```

### Pagination
```python
# Get more results
results_20 = scraper.search_analyze("game", count=20)
results_50 = scraper.search_analyze("game", count=50)
results_100 = scraper.search_analyze("game", count=100)

print(f"20 results: {len(results_20)}")
print(f"50 results: {len(results_50)}")
print(f"100 results: {len(results_100)}")
```

### Search Result Filtering
```python
results = scraper.search_analyze("music player", count=50)

# Filter by rating
high_rated = [app for app in results if app['score'] and app['score'] >= 4.0]

# Filter by price
free_apps = [app for app in results if app['free']]

# Filter by developer
google_apps = [app for app in results if 'google' in app['developer'].lower()]

print(f"High rated: {len(high_rated)}")
print(f"Free: {len(free_apps)}")
print(f"Google: {len(google_apps)}")
```


================================================
FILE: README/SIMILAR_METHODS.md
================================================
# Similar Methods

Find similar and related apps on Google Play Store based on a reference app.

## Quick Start

```python
from gplay_scraper import GPlayScraper

scraper = GPlayScraper()

# Get similar apps
similar = scraper.similar_analyze("com.whatsapp", count=20)
for app in similar:
    print(f"{app['title']}: {app['score']}★ by {app['developer']}")

# Get specific fields
titles = scraper.similar_get_field("com.whatsapp", "title")
print(titles)

# Get multiple fields
apps = scraper.similar_get_fields("com.whatsapp", ["title", "score", "free"])
print(apps)
```

---

## HTTP Clients

The library supports 7 HTTP clients with automatic fallback. If one fails, it tries the next.

### Supported Clients
1. **requests** (default) - Standard Python HTTP library
2. **curl_cffi** - cURL with browser impersonation
3. **tls_client** - Advanced TLS fingerprinting
4. **httpx** - Modern async-capable HTTP client
5. **urllib3** - Low-level HTTP client
6. **cloudscraper** - Cloudflare bypass
7. **aiohttp** - Async HTTP client

### Usage

```python
# Default (tries requests first, then others)
scraper = GPlayScraper()

# Specify a client
scraper = GPlayScraper(http_client="curl_cffi")
scraper = GPlayScraper(http_client="tls_client")
scraper = GPlayScraper(http_client="httpx")
```

### Installation

```bash
# Default
pip install requests

# Advanced clients (optional)
pip install curl-cffi
pip install tls-client
pip install httpx
pip install urllib3
pip install cloudscraper
pip install aiohttp
```

**Note:** The library automatically falls back to available clients if your preferred one fails.

---

## Methods

### `similar_analyze(app_id, count=100, lang='en', country='us')`
Returns similar apps as a list of dictionaries.

```python
similar = scraper.similar_analyze("com.whatsapp", count=20)
# Returns: [{'appId': '...', 'title': '...', 'score': 4.5, ...}, ...]
```

### `similar_get_field(app_id, field, count=100, lang='en', country='us')`
Returns a specific field from all similar apps.

```python
titles = scraper.similar_get_field("com.whatsapp", "title")
# Returns: ['App 1', 'App 2', 'App 3', ...]
```

### `similar_get_fields(app_id, fields, count=100, lang='en', country='us')`
Returns multiple fields from all similar apps.

```python
apps = scraper.similar_get_fields("com.whatsapp", ["title", "score", "free"])
# Returns: [{'title': 'App 1', 'score': 4.5, 'free': True}, ...]
```

### `similar_print_field(app_id, field, count=100, lang='en', country='us')`
Prints a specific field from all similar apps.

```python
scraper.similar_print_field("com.whatsapp", "title", count=10)
# Output:
# 1. title: App 1
# 2. title: App 2
# 3. title: App 3
```

### `similar_print_fields(app_id, fields, count=100, lang='en', country='us')`
Prints multiple fields from all similar apps.

```python
scraper.similar_print_fields("com.whatsapp", ["title", "score"], count=10)
# Output:
# 1. title: App 1, score: 4.5
# 2. title: App 2, score: 4.2
```

### `similar_print_all(app_id, count=100, lang='en', country='us')`
Prints all data for all similar apps as formatted JSON.

```python
scraper.similar_print_all("com.whatsapp", count=20)
# Output: Full JSON array with all similar apps
```

---

## Available Fields

- `appId` - App package name (e.g., "com.example.app")
- `title` - App name
- `description` - App description
- `icon` - App icon URL
- `url` - Play Store URL
- `developer` - Developer name
- `score` - Average rating (1-5)
- `scoreText` - Rating as text (e.g., "4.5")
- `currency` - Price currency (e.g., "USD")
- `price` - App price (0 if free)
- `free` - Boolean, true if free

---

## Practical Examples

### Competitive Analysis
```python
app_id = "com.whatsapp"
similar = scraper.similar_get_fields(app_id, ["title", "score", "developer"], count=30)

print(f"Competitors of {app_id}:")
for i, app in enumerate(similar[:10], 1):
    print(f"{i}. {app['title']}: {app['score']}★ by {app['developer']}")

# Calculate average competitor rating
avg_score = sum(app['score'] or 0 for app in similar) / len(similar)
print(f"\nAverage competitor rating: {avg_score:.2f}★")
```

### Find Better Alternatives
```python
app_id = "com.example.app"
my_app = scraper.app_get_field(app_id, "score")
similar = scraper.similar_get_fields(app_id, ["title", "score", "url"], count=50)

# Find apps with higher ratings
better_apps = [app for app in similar if app['score'] and app['score'] > my_app]
better_apps.sort(key=lambda x: x['score'], reverse=True)

print(f"Apps better than {app_id} ({my_app}★):")
for app in better_apps[:10]:
    print(f"- {app['title']}: {app['score']}★")
```

### Market Positioning
```python
app_id = "com.whatsapp"
similar = scraper.similar_get_fields(app_id, ["title", "free", "price", "score"], count=50)

free_apps = [app for app in similar if app['free']]
paid_apps = [app for app in similar if not app['free']]

print(f"Market Analysis for {app_id}:")
print(f"  Free competitors: {len(free_apps)}")
print(f"  Paid competitors: {len(paid_apps)}")
if free_apps:
    print(f"  Free avg rating: {sum(a['score'] or 0 for a in free_apps)/len(free_apps):.2f}★")
if paid_apps:
    print(f"  Paid avg rating: {sum(a['score'] or 0 for a in paid_apps)/len(paid_apps):.2f}★")
```

### Developer Overlap Analysis
```python
from collections import Counter

app_id = "com.whatsapp"
similar = scraper.similar_get_field(app_id, "developer", count=50)

# Count apps per developer
developer_counts = Counter(similar)
top_developers = developer_counts.most_common(5)

print(f"Top developers in similar apps to {app_id}:")
for developer, count in top_developers:
    print(f"{developer}: {count} apps")
```

### Export Similar Apps
```python
import json

app_id = "com.whatsapp"
similar = scraper.similar_analyze(app_id, count=50)

with open(f'similar_to_{app_id}.json', 'w') as f:
    json.dump(similar, f, indent=2)

print(f"Exported {len(similar)} similar apps to similar_to_{app_id}.json")
```

### Compare Multiple Apps
```python
apps_to_compare = ["com.whatsapp", "org.telegram.messenger", "com.viber.voip"]
all_similar = {}

for app_id in apps_to_compare:
    similar = scraper.similar_get_fields(app_id, ["appId", "title"], count=20)
    all_similar[app_id] = [app['appId'] for app in similar]
    print(f"{app_id}: {len(similar)} similar apps")

# Find common competitors
common = set(all_similar[apps_to_compare[0]])
for app_id in apps_to_compare[1:]:
    common &= set(all_similar[app_id])

print(f"\nCommon competitors: {len(common)}")
for app_id in list(common)[:5]:
    title = scraper.app_get_field(app_id, "title")
    print(f"- {title}")
```

### Feature Gap Analysis
```python
app_id = "com.whatsapp"
similar = scraper.similar_get_fields(app_id, ["title", "score"], count=30)

# Get top-rated competitors
top_competitors = sorted(similar, key=lambda x: x['score'] or 0, reverse=True)[:5]

print(f"Top-rated competitors of {app_id}:")
for i, app in enumerate(top_competitors, 1):
    print(f"{i}. {app['title']}: {app['score']}★")
    # You can then analyze these apps individually for features
```

### Price Comparison
```python
app_id = "com.example.paidapp"
my_price = scraper.app_get_field(app_id, "price")
similar = scraper.similar_get_fields(app_id, ["title", "price", "free"], count=50)

paid_similar = [app for app in similar if not app['free'] and app['price']]

if paid_similar:
    prices = [app['price'] for app in paid_similar]
    print(f"Price Comparison:")
    print(f"  Your app: ${my_price:.2f}")
    print(f"  Competitor min: ${min(prices):.2f}")
    print(f"  Competitor max: ${max(prices):.2f}")
    print(f"  Competitor avg: ${sum(prices)/len(prices):.2f}")
```

---

## Parameters

### Initialization
- `http_client` (str, optional) - HTTP client to use: "requests", "curl_cffi", "tls_client", "httpx", "urllib3", "cloudscraper", "aiohttp" (default: "requests")

### Method Parameters
- `app_id` (str, required) - App package name to find similar apps for
- `count` (int, optional) - Maximum number of similar apps to return (default: 100)
- `lang` (str, optional) - Language code (default: 'en')
- `country` (str, optional) - Country code (default: 'us')
- `field` (str) - Single field name
- `fields` (List[str]) - List of field names

### Language & Country Codes
- **Language**: 'en', 'es', 'fr', 'de', 'ja', 'ko', 'pt', 'ru', 'zh', etc.
- **Country**: 'us', 'gb', 'ca', 'au', 'in', 'br', 'jp', 'kr', 'de', 'fr', etc.

---

## When to Use Each Method

- **`similar_analyze()`** - Need complete data for all similar apps
- **`similar_get_field()`** - Need just one field from all similar apps
- **`similar_get_fields()`** - Need specific fields from all similar apps (more efficient)
- **`similar_print_field()`** - Quick debugging/console output
- **`similar_print_fields()`** - Quick debugging of multiple fields
- **`similar_print_all()`** - Explore available data structure
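
The split between `similar_analyze()` and `similar_get_fields()` is essentially a projection over full records. This standalone sketch (a hypothetical helper, not part of the library) illustrates why requesting only the fields you need is lighter to process downstream:

```python
def project(records, fields):
    """Keep only the requested keys from each record (missing keys become None)."""
    return [{f: rec.get(f) for f in fields} for rec in records]

# Sample records shaped like the similar-app fields documented above
records = [
    {"title": "Telegram", "score": 4.3, "free": True},
    {"title": "Signal", "score": 4.4, "free": True},
]

project(records, ["title", "score"])
# [{'title': 'Telegram', 'score': 4.3}, {'title': 'Signal', 'score': 4.4}]
```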

---

## Use Cases

### Competitive Intelligence
- Identify direct competitors
- Monitor competitor ratings and pricing
- Track market positioning
- Discover new entrants in your category

### Market Research
- Understand market landscape
- Analyze pricing strategies
- Identify market gaps
- Study successful competitors

### Product Development
- Find feature inspiration
- Identify differentiation opportunities
- Benchmark against competitors
- Discover user expectations

### Marketing Strategy
- Identify target audience overlap
- Study competitor positioning
- Find partnership opportunities
- Analyze market trends

---

## Advanced Features

### Rate Limiting
Built-in rate limiting (a 1-second delay between requests) reduces the risk of being blocked.
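
The documentation doesn't show the limiter's internals; the behavior is roughly this sketch (an illustration of the idea, not the library's actual code):

```python
import time

class Throttle:
    """Minimal sketch of an inter-request delay (default 1 second)."""

    def __init__(self, delay=1.0):
        self.delay = delay
        self._last = 0.0

    def wait(self):
        # Sleep just long enough so that calls are at least `delay` seconds apart
        remaining = self._last + self.delay - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)
        self._last = time.monotonic()
```

Each scraper request would call `wait()` before hitting the network, spacing calls out automatically.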

### Error Handling
```python
from gplay_scraper import GPlayScraper, AppNotFoundError, NetworkError

scraper = GPlayScraper()

try:
    similar = scraper.similar_analyze("invalid.app.id")
except AppNotFoundError:
    print("App not found or no similar apps available")
except NetworkError:
    print("Network error occurred")
```

### Multi-Region Similar Apps
```python
# Get similar apps from different regions
us_similar = scraper.similar_analyze("com.whatsapp", country="us", count=20)
uk_similar = scraper.similar_analyze("com.whatsapp", country="gb", count=20)
jp_similar = scraper.similar_analyze("com.whatsapp", country="jp", lang="ja", count=20)

print(f"US similar apps: {len(us_similar)}")
print(f"UK similar apps: {len(uk_similar)}")
print(f"JP similar apps: {len(jp_similar)}")
```

### Filtering Results
```python
similar = scraper.similar_analyze("com.whatsapp", count=50)

# Filter by rating
high_rated = [app for app in similar if app['score'] and app['score'] >= 4.0]

# Filter by price
free_apps = [app for app in similar if app['free']]

# Filter by developer
exclude_dev = [app for app in similar if app['developer'] != "WhatsApp LLC"]

print(f"High rated: {len(high_rated)}")
print(f"Free: {len(free_apps)}")
print(f"Other developers: {len(exclude_dev)}")
```

### Batch Analysis
```python
# Analyze similar apps for multiple apps
apps = ["com.whatsapp", "org.telegram.messenger", "com.viber.voip"]
results = {}

for app_id in apps:
    similar = scraper.similar_get_fields(app_id, ["title", "score"], count=10)
    results[app_id] = similar
    print(f"{app_id}: {len(similar)} similar apps found")
```


================================================
FILE: README/SUGGEST_METHODS.md
================================================
# Suggest Methods

Get search suggestions and autocomplete from Google Play Store for keyword discovery and ASO.

## Quick Start

```python
from gplay_scraper import GPlayScraper

scraper = GPlayScraper()

# Get suggestions
suggestions = scraper.suggest_analyze("video", count=5)
print(suggestions)
# ['video player', 'video editor', 'video downloader', 'video maker', 'video call']

# Get nested suggestions
nested = scraper.suggest_nested("video", count=3)
for term, suggestions in nested.items():
    print(f"{term}: {suggestions}")
```

---

## HTTP Clients

The library supports 7 HTTP clients with automatic fallback. If one fails, it tries the next.

### Supported Clients
1. **requests** (default) - Standard Python HTTP library
2. **curl_cffi** - cURL with browser impersonation
3. **tls_client** - Advanced TLS fingerprinting
4. **httpx** - Modern async-capable HTTP client
5. **urllib3** - Low-level HTTP client
6. **cloudscraper** - Cloudflare bypass
7. **aiohttp** - Async HTTP client
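
The fallback behavior can be pictured as trying a list of client callables in order (a generic sketch of the pattern, not the library's implementation):

```python
def fetch_with_fallback(url, clients):
    """Try each client callable in order; return the first successful response."""
    last_err = None
    for make_request in clients:
        try:
            return make_request(url)
        except Exception as err:  # a real implementation would catch narrower error types
            last_err = err
    raise last_err  # every client failed; surface the last error
```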

### Usage

```python
# Default (tries requests first, then others)
scraper = GPlayScraper()

# Specify a client
scraper = GPlayScraper(http_client="curl_cffi")
scraper = GPlayScraper(http_client="tls_client")
scraper = GPlayScraper(http_client="httpx")
```

### Installation

```bash
# Default
pip install requests

# Advanced clients (optional)
pip install curl-cffi
pip install tls-client
pip install httpx
pip install urllib3
pip install cloudscraper
pip install aiohttp
```

**Note:** The library automatically falls back to available clients if your preferred one fails.

---

## Methods

### `suggest_analyze(term, count=5, lang='en', country='us')`
Returns search suggestions as a list of strings.

```python
suggestions = scraper.suggest_analyze("video", count=5)
# Returns: ['video player', 'video editor', 'video downloader', 'video maker', 'video call']
```

### `suggest_nested(term, count=5, lang='en', country='us')`
Returns nested suggestions (suggestions for each suggestion).

```python
nested = scraper.suggest_nested("video", count=3)
# Returns: {
#   'video player': ['video player hd', 'video player all format', 'video player pro'],
#   'video editor': ['video editor pro', 'video editor free', 'video editor app'],
#   'video downloader': ['video downloader for facebook', 'video downloader hd', ...]
# }
```

### `suggest_print_all(term, count=5, lang='en', country='us')`
Prints suggestions as formatted JSON.

```python
scraper.suggest_print_all("video", count=5)
# Output: ["video player", "video editor", "video downloader", "video maker", "video call"]
```

### `suggest_print_nested(term, count=5, lang='en', country='us')`
Prints nested suggestions as formatted JSON.

```python
scraper.suggest_print_nested("video", count=3)
# Output: Full JSON object with nested suggestions
```

---

## Return Formats

### Simple Suggestions (List)
```python
['video player', 'video editor', 'video downloader', 'video maker', 'video call']
```

### Nested Suggestions (Dictionary)
```python
{
  'video player': ['video player hd', 'video player all format', 'video player pro'],
  'video editor': ['video editor pro', 'video editor free', 'video editor app'],
  'video downloader': ['video downloader for facebook', 'video downloader hd']
}
```
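
For keyword research, a nested map like the one above is often flattened into a single de-duplicated list. A small helper (hypothetical, not part of the library) might look like:

```python
def flatten_keywords(nested):
    """Collect parent and child terms into one ordered, de-duplicated list."""
    seen = []
    for parent, children in nested.items():
        for term in [parent, *children]:
            if term not in seen:
                seen.append(term)
    return seen

nested = {
    "video player": ["video player hd", "video player pro"],
    "video editor": ["video editor pro"],
}
flatten_keywords(nested)
# ['video player', 'video player hd', 'video player pro', 'video editor', 'video editor pro']
```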

---

## Practical Examples

### Autocomplete Feature
```python
def autocomplete(user_input):
    """Provide autocomplete suggestions as user types"""
    if len(user_input) < 2:
        return []
    
    suggestions = scraper.suggest_analyze(user_input, count=10)
    return suggestions

# Usage
print(autocomplete("gam"))  # ['game', 'games', 'gaming', ...]
print(autocomplete("photo"))  # ['photo editor', 'photo collage', ...]
```

### Keyword Research
```python
base_keywords = ["fitness", "workout", "exercise"]
all_keywords = set()

for keyword in base_keywords:
    suggestions = scraper.suggest_analyze(keyword, count=10)
    all_keywords.update(suggestions)
    print(f"{keyword}: {len(suggestions)} suggestions")

print(f"\nTotal unique keywords: {len(all_keywords)}")
print("Sample keywords:", list(all_keywords)[:10])
```

### Deep Keyword Mining
```python
term = "photo editor"
nested = scraper.suggest_nested(term, count=5)

print(f"Keyword tree for '{term}':")
for parent, children in nested.items():
    print(f"\n{parent}:")
    for child in children:
        print(f"  - {child}")
```

### ASO Keyword Discovery
```python
import json

def discover_keywords(seed_term, depth=2):
    """Discover keywords with specified depth"""
    keywords = {}
    
    # Level 1
    level1 = scraper.suggest_analyze(seed_term, count=10)
    keywords[seed_term] = level1
    
    if depth > 1:
        # Level 2
        for term in level1[:5]:  # Limit to avoid too many requests
            level2 = scraper.suggest_analyze(term, count=5)
            keywords[term] = level2
    
    return keywords

keywords = discover_keywords("game", depth=2)
print(json.dumps(keywords, indent=2))
```

### Trending Search Terms
```python
categories = ["game", "social", "productivity", "photo", "music"]
trending = {}

for category in categories:
    suggestions = scraper.suggest_analyze(category, count=5)
    trending[category] = suggestions
    print(f"{category}: {', '.join(suggestions[:3])}...")
```

### Long-Tail Keywords
```python
short_term = "vpn"
suggestions = scraper.suggest_analyze(short_term, count=10)

# Filter for long-tail (3+ words)
long_tail = [s for s in suggestions if len(s.split()) >= 3]

print(f"Long-tail keywords for '{short_term}':")
for keyword in long_tail:
    print(f"- {keyword}")
```

### Competitor Keyword Analysis
```python
competitor_apps = ["whatsapp", "telegram", "signal"]
all_suggestions = {}

for app in competitor_apps:
    suggestions = scraper.suggest_analyze(app, count=10)
    all_suggestions[app] = suggestions
    print(f"{app}: {len(suggestions)} suggestions")

# Find common keywords
common = set(all_suggestions[competitor_apps[0]])
for app in competitor_apps[1:]:
    common &= set(all_suggestions[app])

print(f"\nCommon keywords: {common}")
```

### Export Keyword Map
```python
import json

term = "fitness"
nested = scraper.suggest_nested(term, count=10)

with open(f'keywords_{term}.json', 'w') as f:
    json.dump(nested, f, indent=2)

print(f"Exported keyword map for '{term}'")
print(f"Total parent keywords: {len(nested)}")
print(f"Total child keywords: {sum(len(v) for v in nested.values())}")
```

### Search Volume Estimation
```python
term = "photo editor"
suggestions = scraper.suggest_analyze(term, count=20)

# Suggestions appear in order of popularity (roughly)
print(f"Top suggestions for '{term}' (by estimated popularity):")
for i, suggestion in enumerate(suggestions[:10], 1):
    print(f"{i}. {suggestion}")
```
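
Since the suggest endpoint exposes no real volume numbers, one rough heuristic is to weight terms by reciprocal rank. This is an assumption-laden sketch for comparing terms, not data the Play Store provides:

```python
def rank_weight_scores(suggestions):
    """Assign a rough popularity weight by reciprocal rank (1, 1/2, 1/3, ...)."""
    return {term: round(1 / (i + 1), 3) for i, term in enumerate(suggestions)}

rank_weight_scores(["photo editor free", "photo editor pro", "photo collage"])
# {'photo editor free': 1.0, 'photo editor pro': 0.5, 'photo collage': 0.333}
```

Weights like these can be summed across multiple seed terms to rank a combined keyword pool.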

---

## Parameters

### Initialization
- `http_client` (str, optional) - HTTP client to use: "requests", "curl_cffi", "tls_client", "httpx", "urllib3", "cloudscraper", "aiohttp" (default: "requests")

### Method Parameters
- `term` (str, required) - Search term or keyword
- `count` (int, optional) - Number of suggestions to return (default: 5, max: ~10)
- `lang` (str, optional) - Language code (default: 'en')
- `country` (str, optional) - Country code (default: 'us')

### Search Term Tips
- Use partial words: "gam" → "game", "games", "gaming"
- Try categories: "fitness", "photo", "music"
- Test variations: "vpn", "vpn free", "vpn app"
- Use brand names: "whatsapp", "instagram"
- Combine terms: "photo editor free"

### Language & Country Codes
- **Language**: 'en', 'es', 'fr', 'de', 'ja', 'ko', 'pt', 'ru', 'zh', etc.
- **Country**: 'us', 'gb', 'ca', 'au', 'in', 'br', 'jp', 'kr', 'de', 'fr', etc.

---

## When to Use Each Method

- **`suggest_analyze()`** - Get simple list of suggestions for autocomplete or keyword research
- **`suggest_nested()`** - Deep keyword mining with two levels of suggestions
- **`suggest_print_all()`** - Quick debugging/console output of suggestions
- **`suggest_print_nested()`** - Quick debugging/console output of nested suggestions

---

## Use Cases

### App Store Optimization (ASO)
- Discover high-traffic keywords
- Find long-tail keyword opportunities
- Analyze competitor keywords
- Optimize app title and description

### Market Research
- Identify trending search terms
- Understand user search behavior
- Discover niche markets
- Track keyword trends over time

### Content Strategy
- Generate content ideas
- Find related topics
- Optimize metadata
- Improve discoverability

### Competitive Analysis
- Discover competitor keywords
- Find keyword gaps
- Identify market opportunities
- Track competitor positioning

---

## Advanced Features

### Rate Limiting
Built-in rate limiting (a 1-second delay between requests) reduces the risk of being blocked.

### Error Handling
```python
from gplay_scraper import GPlayScraper, NetworkError

scraper = GPlayScraper()

try:
    suggestions = scraper.suggest_analyze("")
except ValueError:
    print("Term cannot be empty")
except NetworkError:
    print("Network error occurred")
```

### Multi-Region Suggestions
```python
# Get suggestions from different regions
us_suggestions = scraper.suggest_analyze("game", country="us")
uk_suggestions = scraper.suggest_analyze("game", country="gb")
jp_suggestions = scraper.suggest_analyze("game", country="jp", lang="ja")

print(f"US: {us_suggestions[:3]}")
print(f"UK: {uk_suggestions[:3]}")
print(f"JP: {jp_suggestions[:3]}")
```

### Batch Processing
```python
terms = ["fitness", "diet", "workout", "yoga", "meditation"]
all_suggestions = {}

for term in terms:
    suggestions = scraper.suggest_analyze(term, count=10)
    all_suggestions[term] = suggestions
    print(f"{term}: {len(suggestions)} suggestions")

# Find overlapping keywords
all_keywords = set()
for suggestions in all_suggestions.values():
    all_keywords.update(suggestions)

print(f"\nTotal unique keywords: {len(all_keywords)}")
```

### Recursive Keyword Expansion
```python
def expand_keywords(term, max_depth=2, current_depth=0):
    """Recursively expand keywords"""
    if current_depth >= max_depth:
        return []
    
    suggestions = scraper.suggest_analyze(term, count=5)
    all_keywords = suggestions.copy()
    
    if current_depth < max_depth - 1:
        for suggestion in suggestions[:2]:  # Limit to avoid explosion
            child_keywords = expand_keywords(suggestion, max_depth, current_depth + 1)
            all_keywords.extend(child_keywords)
    
    return all_keywords

keywords = expand_keywords("game", max_depth=2)
print(f"Expanded to {len(set(keywords))} unique keywords")
```

### Suggestion Filtering
```python
term = "game"
suggestions = scraper.suggest_analyze(term, count=20)

# Filter by length
short = [s for s in suggestions if len(s.split()) <= 2]
long = [s for s in suggestions if len(s.split()) > 2]

# Filter by keyword
free_games = [s for s in suggestions if 'free' in s.lower()]

print(f"Short keywords: {len(short)}")
print(f"Long keywords: {len(long)}")
print(f"Free games: {len(free_games)}")
```


================================================
FILE: README.md
================================================
# Google Play Scraper - Python Library 📱

[![PyPI version](https://badge.fury.io/py/gplay-scraper.svg)](https://badge.fury.io/py/gplay-scraper)
[![Python](https://img.shields.io/badge/python-3.7+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Documentation](https://img.shields.io/badge/docs-available-brightgreen.svg)](https://mohammedcha.github.io/gplay-scraper/)
[![Downloads](https://pepy.tech/badge/gplay-scraper)](https://pepy.tech/project/gplay-scraper)
[![GitHub stars](https://img.shields.io/github/stars/Mohammedcha/gplay-scraper.svg)](https://github.com/Mohammedcha/gplay-scraper/stargazers)
[![GitHub issues](https://img.shields.io/github/issues/Mohammedcha/gplay-scraper.svg)](https://github.com/Mohammedcha/gplay-scraper/issues)

<div align="center">
  <img src="https://github.com/Mohammedcha/gplay-scraper/blob/main/assets/gplay-scraper.png" alt="GPlay Scraper">
</div>

**GPlay Scraper** is a powerful Python library for extracting comprehensive data from the Google Play Store. Built for developers, data analysts, and researchers, it provides easy access to app information, user reviews, search results, top charts, and market intelligence—all without requiring API keys.

## 🐛 Found a Bug? Help Us Improve!

**We value your feedback!** If you encounter any bugs, errors, or have suggestions for improvements, please [open an issue](https://github.com/Mohammedcha/gplay-scraper/issues). Contributors who report bugs or suggest features will be acknowledged in our [Contributors section](#-contributors) 🙏

## 🎯 What Can You Scrape?

**App Data (65+ Fields)**
- Basic info: title, developer, description, category, genre
- Ratings & reviews: score, ratings count, histogram, user reviews
- Install metrics: install count ranges, download statistics
- Pricing: free/paid status, price, in-app purchases, currency
- Media: icon, screenshots, video, header image URLs
- Technical: version, size, Android version, release date, last update
- Content: age rating, privacy policy, developer contact info
- Features: permissions, what's new, developer website

**Search & Discovery**
- Search apps by keyword with filtering and pagination
- Get search suggestions and autocomplete terms
- Find similar/competitor apps for any app
- Access top charts (free, paid, grossing) across 54 categories

**Developer Intelligence**
- Get complete app portfolio for any developer
- Track developer's app performance and ratings
- Analyze developer's market presence

**User Reviews**
- Extract reviews with ratings, text, and timestamps
- Get reviewer names and helpful vote counts
- Filter by newest, most relevant, or highest rated
- Track app versions mentioned in reviews

**Market Research**
- Multi-language support (100+ languages)
- Multi-region data (150+ countries)
- Localized pricing and availability
- Competitive analysis and benchmarking

## 🆕 What's New in v1.0.6

**✅ Critical Bug Fixes:**
- **Reviews Pagination Fix** - Fixed critical issue when requesting more reviews than available
- **NoneType Error Resolution** - Resolved 'NoneType' object is not subscriptable error in reviews
- **Empty Response Handling** - Better handling of apps with limited reviews
- **Token Extraction Logic** - Improved pagination token handling for empty responses
- **Graceful Degradation** - Now returns available reviews instead of crashing

**✅ Enhanced Reliability:**
- **Safe Bounds Checking** - Added proper bounds checking for pagination tokens
- **Null Checking** - Enhanced null checking for empty data structures
- **Error Recovery** - Improved error handling in ReviewsScraper and ReviewsParser
- **Stability Improvements** - Better handling of edge cases in reviews extraction

**🙏 Acknowledgments:**
- Thanks to [@PhamDinhThienVu](https://github.com/PhamDinhThienVu) for reporting the reviews pagination bug

**✅ 7 Method Types:**
- **App Methods** - Extract 65+ data fields from any app (ratings, installs, pricing, permissions, etc.)
- **Search Methods** - Search Google Play Store apps with comprehensive filtering
- **Reviews Methods** - Extract user reviews with ratings, timestamps, and detailed feedback
- **Developer Methods** - Get all apps published by a specific developer
- **List Methods** - Access top charts (top free, top paid, top grossing) by category
- **Similar Methods** - Find similar/competitor apps for market research
- **Suggest Methods** - Get search suggestions and autocomplete for ASO

## ⚡ Key Features

**Powerful & Flexible**
- **7 HTTP clients with automatic fallback** - requests, curl_cffi, tls_client, httpx, urllib3, cloudscraper, aiohttp
- **42 functions across 7 method types** - analyze(), get_field(), get_fields(), print_field(), print_fields(), print_all()
- **No API keys required** - Direct scraping from Google Play Store
- **Multi-language & multi-region** - 100+ languages, 150+ countries

**Reliable & Safe**
- **Built-in rate limiting** - Prevents blocking with automatic delays
- **Automatic HTTP client fallback** - Ensures maximum reliability
- **Error handling** - Graceful failures with informative messages
- **Retry logic** - Automatic retries for failed requests

**Developer Friendly**
- **Simple API** - Intuitive method names and parameters
- **Comprehensive documentation** - Examples for every use case
- **Type hints** - Full IDE autocomplete support
- **Flexible output** - Get data as dict/list or print as JSON

## 📋 Requirements

- Python 3.7+
- requests (default HTTP client)
- Optional: curl-cffi, tls-client, httpx, urllib3, cloudscraper, aiohttp (for advanced HTTP clients)

## 🚀 Installation

```bash
# Install from PyPI
pip install gplay-scraper

# Or install in development mode
pip install -e .
```

## 📖 Quick Start

```python
from gplay_scraper import GPlayScraper

# Initialize with HTTP client (curl_cffi recommended for best performance)
scraper = GPlayScraper(http_client="curl_cffi")

# Get app details with different image sizes
app_id = "com.whatsapp"
scraper.app_print_all(app_id, lang="en", country="us", assets="LARGE")

# Get high-quality app data
data = scraper.app_analyze(app_id, assets="ORIGINAL")  # Maximum image quality
icon_small = scraper.app_get_field(app_id, "icon", assets="SMALL")  # 512px icon

# Print specific fields with custom image sizes
scraper.app_print_field(app_id, "icon", assets="LARGE")  # Print large icon URL
scraper.app_print_fields(app_id, ["icon", "screenshots"], assets="ORIGINAL")  # Print multiple fields

# Search for apps
scraper.search_print_all("social media", count=10, lang="en", country="us")

# Get reviews
scraper.reviews_print_all(app_id, count=50, sort="NEWEST", lang="en", country="us")

# Get developer apps
scraper.developer_print_all("5700313618786177705", count=20, lang="en", country="us")

# Get top charts
scraper.list_print_all("TOP_FREE", "GAME", count=20, lang="en", country="us")

# Get similar apps
scraper.similar_print_all(app_id, count=30, lang="en", country="us")

# Get search suggestions
scraper.suggest_print_all("fitness", count=5, lang="en", country="us")
```

## 🎯 7 Method Types

GPlay Scraper provides 7 method types with 42 functions to interact with Google Play Store data:

### 1. [App Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/APP_METHODS.md) - Extract app details (65+ fields)
### 2. [Search Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/SEARCH_METHODS.md) - Search for apps by keyword
### 3. [Reviews Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/REVIEWS_METHODS.md) - Get user reviews and ratings
### 4. [Developer Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/DEVELOPER_METHODS.md) - Get all apps from a developer
### 5. [List Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/LIST_METHODS.md) - Get top charts (free, paid, grossing)
### 6. [Similar Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/SIMILAR_METHODS.md) - Find similar/related apps
### 7. [Suggest Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/SUGGEST_METHODS.md) - Get search suggestions/autocomplete

Each method type has 6 functions:
- `analyze()` - Get all data as dictionary/list
- `get_field()` - Get single field value
- `get_fields()` - Get multiple fields
- `print_field()` - Print single field to console
- `print_fields()` - Print multiple fields to console
- `print_all()` - Print all data as JSON

## 🎯 Method Examples

### 1. [App Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/APP_METHODS.md) - Get App Details
Extract comprehensive information about any app including ratings, installs, pricing, and 65+ data fields.

📖 **[View detailed documentation →](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/APP_METHODS.md)**

```python
from gplay_scraper import GPlayScraper

scraper = GPlayScraper(http_client="curl_cffi")

# Print all app data as JSON
scraper.app_print_all("com.whatsapp", lang="en", country="us")
```

**What you get:** Complete app profile with title, developer, ratings, install counts, pricing, screenshots, permissions, and more.

📄 **[View JSON example →](https://github.com/Mohammedcha/gplay-scraper/blob/main/output/app_example.json)**

---

### 2. [Search Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/SEARCH_METHODS.md) - Find Apps by Keyword
Search the Play Store by keyword, app name, or category to discover apps.

📖 **[View detailed documentation →](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/SEARCH_METHODS.md)**

```python
from gplay_scraper import GPlayScraper

scraper = GPlayScraper(http_client="curl_cffi")

# Print all search results as JSON
scraper.search_print_all("fitness tracker", count=20, lang="en", country="us")
```

**What you get:** List of apps matching your search with titles, developers, ratings, prices, and Play Store URLs.

📄 **[View JSON example →](https://github.com/Mohammedcha/gplay-scraper/blob/main/output/search_example.json)**

---

### 3. [Reviews Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/REVIEWS_METHODS.md) - Extract User Reviews
Get user reviews with ratings, comments, timestamps, and helpful votes for sentiment analysis.

📖 **[View detailed documentation →](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/REVIEWS_METHODS.md)**

```python
from gplay_scraper import GPlayScraper

scraper = GPlayScraper(http_client="curl_cffi")

# Print all reviews as JSON
scraper.reviews_print_all("com.whatsapp", count=100, sort="NEWEST", lang="en", country="us")
```

**What you get:** User reviews with names, ratings (1-5 stars), review text, timestamps, app versions, and helpful vote counts.

📄 **[View JSON example →](https://github.com/Mohammedcha/gplay-scraper/blob/main/output/reviews_example.json)**

---

### 4. [Developer Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/DEVELOPER_METHODS.md) - Get Developer's Apps
Retrieve all apps published by a specific developer using their developer ID.

📖 **[View detailed documentation →](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/DEVELOPER_METHODS.md)**

```python
from gplay_scraper import GPlayScraper

scraper = GPlayScraper(http_client="curl_cffi")

# Print all developer apps as JSON
scraper.developer_print_all("5700313618786177705", count=50, lang="en", country="us")
```

**What you get:** Complete portfolio of apps from a developer with titles, ratings, prices, and descriptions.

📄 **[View JSON example →](https://github.com/Mohammedcha/gplay-scraper/blob/main/output/developer_example.json)**

---

### 5. [List Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/LIST_METHODS.md) - Get Top Charts
Access Play Store top charts including top free, top paid, and top grossing apps by category.

📖 **[View detailed documentation →](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/LIST_METHODS.md)**

```python
from gplay_scraper import GPlayScraper

scraper = GPlayScraper(http_client="curl_cffi")

# Print top free games as JSON
scraper.list_print_all("TOP_FREE", "GAME", count=50, lang="en", country="us")
```

**What you get:** Top-ranked apps with titles, developers, ratings, install counts, prices, and screenshots.

📄 **[View JSON example →](https://github.com/Mohammedcha/gplay-scraper/blob/main/output/list_example.json)**

---

### 6. [Similar Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/SIMILAR_METHODS.md) - Find Related Apps
Discover apps similar to a reference app for competitive analysis and market research.

📖 **[View detailed documentation →](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/SIMILAR_METHODS.md)**

```python
from gplay_scraper import GPlayScraper

scraper = GPlayScraper(http_client="curl_cffi")

# Print similar apps as JSON
scraper.similar_print_all("com.whatsapp", count=30, lang="en", country="us")
```

**What you get:** List of similar/competitor apps with titles, developers, ratings, and pricing information.

📄 **[View JSON example →](https://github.com/Mohammedcha/gplay-scraper/blob/main/output/similar_example.json)**

---

### 7. [Suggest Methods](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/SUGGEST_METHODS.md) - Get Search Suggestions
Get autocomplete suggestions and keyword ideas for ASO and market research.

📖 **[View detailed documentation →](https://github.com/Mohammedcha/gplay-scraper/blob/main/README/SUGGEST_METHODS.md)**

```python
from gplay_scraper import GPlayScraper

scraper = GPlayScraper(http_client="curl_cffi")

# Print search suggestions as JSON
scraper.suggest_print_all("photo editor", count=10, lang="en", country="us")
```

**What you get:** List of popular search terms related to your keyword for ASO and keyword research.

📄 **[View JSON example →](https://github.com/Mohammedcha/gplay-scraper/blob/main/output/suggest_example.json)**

---

## 🤝 Contributing

1. Fork the repository
2. Create your feature branch
3. Make your changes
4. Test thoroughly
5. Submit a pull request

## 📄 License

This project is licensed under the MIT License.

## 🙏 Contributors

Special thanks to developers who helped improve this library:

- [@PhamDinhThienVu](https://github.com/PhamDinhThienVu) - Reported reviews pagination bug (v1.0.6)
- [@elmissouri16](https://github.com/elmissouri16) - Suggested multiple HTTP clients support (v1.0.3)

---

**Happy Analyzing! 🚀**

================================================
FILE: SECURITY.md
================================================
# Security Policy

## Supported Versions

| Version | Supported          |
| ------- | ------------------ |
| 1.0.x   | :white_check_mark: |

## Reporting a Vulnerability

If you discover a security vulnerability, please report it privately using GitHub's "Report a vulnerability" feature on the repository's Security tab.

Please do not report security vulnerabilities through public GitHub issues.

We will respond to security reports within 48 hours.

================================================
FILE: build_docs.py
================================================
#!/usr/bin/env python3
"""Build documentation using Sphinx."""

import os
import sys
import subprocess
from pathlib import Path

def install_docs_requirements():
    """Install documentation requirements."""
    print("[INFO] Installing documentation requirements...")
    try:
        subprocess.run([
            sys.executable, "-m", "pip", "install", 
            "-r", "docs/requirements.txt"
        ], check=True)
        print("[OK] Documentation requirements installed!")
    except subprocess.CalledProcessError as e:
        print(f"[ERROR] Failed to install requirements: {e}")
        return False
    return True

def build_html_docs():
    """Build HTML documentation."""
    print("[INFO] Building HTML documentation...")
    
    docs_dir = Path("docs")
    build_dir = docs_dir / "_build" / "html"
    
    try:
        # Change to docs directory
        os.chdir(docs_dir)
        
        # Build documentation
        subprocess.run([
            "sphinx-build", "-b", "html", ".", "_build/html"
        ], check=True)
        
        print("[OK] Documentation built successfully!")
        print(f"[INFO] Open: {Path('_build/html/index.html').absolute()}")
        return True
        
    except subprocess.CalledProcessError as e:
        print(f"[ERROR] Failed to build documentation: {e}")
        return False
    except FileNotFoundError:
        print("[ERROR] sphinx-build not found. Make sure the documentation requirements are installed.")
        return False

def main():
    """Main function to build documentation."""
    print("=== GPlay Scraper Documentation Builder ===\n")
    
    # Install requirements
    if not install_docs_requirements():
        return 1
    
    # Build documentation
    if not build_html_docs():
        return 1
    
    print("\n[SUCCESS] Documentation build complete!")
    print("[INFO] Open docs/_build/html/index.html in your browser")
    return 0

if __name__ == "__main__":
    sys.exit(main())

================================================
FILE: docs/README.md
================================================
# GPlay Scraper Documentation

## Build Documentation

```bash
cd docs
pip install -r requirements.txt
sphinx-build -b html . _build/html
```

## Open Documentation

```bash
start _build/html/index.html  # Windows
open _build/html/index.html   # Mac
xdg-open _build/html/index.html  # Linux
```

## Live Reload

```bash
pip install sphinx-autobuild
sphinx-autobuild . _build/html
```


================================================
FILE: docs/api/app.rst
================================================
App Methods
===========

Extract comprehensive app data with 57 fields including install analytics, ratings, pricing, and developer information.

Overview
--------

The App methods provide access to detailed information about any Google Play Store app. All methods return data in JSON format with 57 fields.

Available Methods
-----------------

* ``app_analyze()`` - Get all 57 fields
* ``app_get_field()`` - Get single field value
* ``app_get_fields()`` - Get multiple field values
* ``app_print_field()`` - Print single field
* ``app_print_fields()`` - Print multiple fields
* ``app_print_all()`` - Print all data as JSON

app_analyze()
-------------

Get complete app data with all 57 fields.

**Signature:**

.. code-block:: python

   app_analyze(app_id, lang='en', country='', assets=None)

**Parameters:**

* ``app_id`` (str, required) - Google Play app ID (e.g., 'com.whatsapp')
* ``lang`` (str, optional) - Language code (default: 'en')
* ``country`` (str, optional) - Country code (default: '')
* ``assets`` (str, optional) - Image size: 'SMALL', 'MEDIUM', 'LARGE', 'ORIGINAL'

**Returns:**

Dictionary with 57 fields

**Example:**

.. code-block:: python

   from gplay_scraper import GPlayScraper

   scraper = GPlayScraper()
   app = scraper.app_analyze('com.whatsapp')
   
   print(app['title'])              # WhatsApp Messenger
   print(app['developer'])          # WhatsApp LLC
   print(app['score'])              # 4.2189474
   print(app['realInstalls'])       # 10931553905
   print(app['dailyInstalls'])      # 1815870
   print(app['publisherCountry'])   # United States

**Multi-language Example:**

.. code-block:: python

   # Get app data in Spanish
   app_es = scraper.app_analyze('com.whatsapp', lang='es')
   print(app_es['description'])  # Description in Spanish
   
   # Get app data for UK region
   app_uk = scraper.app_analyze('com.whatsapp', country='gb')

**Image Size Example:**

.. code-block:: python

   # Get large images
   app = scraper.app_analyze('com.whatsapp', assets='LARGE')
   print(app['icon'])  # URL with =w2048 parameter

app_get_field()
---------------

Get a single field value from app data.

**Signature:**

.. code-block:: python

   app_get_field(app_id, field, lang='en', country='', assets=None)

**Parameters:**

* ``app_id`` (str, required) - Google Play app ID
* ``field`` (str, required) - Field name to retrieve
* ``lang`` (str, optional) - Language code
* ``country`` (str, optional) - Country code
* ``assets`` (str, optional) - Image size

**Returns:**

Value of the requested field (type depends on field)

**Example:**

.. code-block:: python

   scraper = GPlayScraper()
   
   # Get title
   title = scraper.app_get_field('com.whatsapp', 'title')
   print(title)  # "WhatsApp Messenger"
   
   # Get score
   score = scraper.app_get_field('com.whatsapp', 'score')
   print(score)  # 4.2189474
   
   # Get daily installs
   daily = scraper.app_get_field('com.whatsapp', 'dailyInstalls')
   print(f"{daily:,}")  # 1,815,870

app_get_fields()
----------------

Get multiple field values from app data.

**Signature:**

.. code-block:: python

   app_get_fields(app_id, fields, lang='en', country='', assets=None)

**Parameters:**

* ``app_id`` (str, required) - Google Play app ID
* ``fields`` (list, required) - List of field names
* ``lang`` (str, optional) - Language code
* ``country`` (str, optional) - Country code
* ``assets`` (str, optional) - Image size

**Returns:**

Dictionary with requested fields and values

**Example:**

.. code-block:: python

   scraper = GPlayScraper()
   
   fields = ['title', 'developer', 'score', 'realInstalls', 'dailyInstalls']
   data = scraper.app_get_fields('com.whatsapp', fields)
   
   print(data)
   # {
   #     'title': 'WhatsApp Messenger',
   #     'developer': 'WhatsApp LLC',
   #     'score': 4.2189474,
   #     'realInstalls': 10931553905,
   #     'dailyInstalls': 1815870
   # }

app_print_field()
-----------------

Print a single field value to console.

**Signature:**

.. code-block:: python

   app_print_field(app_id, field, lang='en', country='', assets=None)

**Returns:**

None (prints to console)

**Example:**

.. code-block:: python

   scraper = GPlayScraper()
   scraper.app_print_field('com.whatsapp', 'title')
   # Output: title: WhatsApp Messenger

app_print_fields()
------------------

Print multiple field values to console.

**Signature:**

.. code-block:: python

   app_print_fields(app_id, fields, lang='en', country='', assets=None)

**Returns:**

None (prints to console)

**Example:**

.. code-block:: python

   scraper = GPlayScraper()
   fields = ['title', 'score', 'dailyInstalls']
   scraper.app_print_fields('com.whatsapp', fields)

app_print_all()
---------------

Print all 57 fields as formatted JSON to console.

**Signature:**

.. code-block:: python

   app_print_all(app_id, lang='en', country='', assets=None)

**Returns:**

None (prints to console)

**Example:**

.. code-block:: python

   scraper = GPlayScraper()
   scraper.app_print_all('com.whatsapp')
   # Outputs all 57 fields as formatted JSON

Available Fields
----------------

The App methods return 57 fields organized into categories:

**Basic Information (5 fields)**

* ``appId`` - Package identifier
* ``title`` - App name
* ``summary`` - Short description
* ``description`` - Full description
* ``appUrl`` - Play Store URL

**Category (4 fields)**

* ``genre`` - Primary category
* ``genreId`` - Category ID
* ``categories`` - All categories (array)
* ``available`` - Availability status (boolean)

**Release & Updates (3 fields)**

* ``released`` - Release date
* ``appAgeDays`` - Days since release (computed)
* ``lastUpdated`` - Last update date

**Media (5 fields)**

* ``icon`` - App icon URL
* ``headerImage`` - Header image URL
* ``screenshots`` - Screenshot URLs (array)
* ``video`` - Promotional video URL
* ``videoImage`` - Video thumbnail URL

**Install Statistics (9 fields)**

* ``installs`` - Install range string
* ``minInstalls`` - Minimum installs
* ``realInstalls`` - Exact install count
* ``dailyInstalls`` - Average daily installs (computed)
* ``minDailyInstalls`` - Min daily installs (computed)
* ``realDailyInstalls`` - Real daily installs (computed)
* ``monthlyInstalls`` - Average monthly installs (computed)
* ``minMonthlyInstalls`` - Min monthly installs (computed)
* ``realMonthlyInstalls`` - Real monthly installs (computed)
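
The computed install metrics are derived from the exact install count and the app's age. A minimal sketch of the assumed arithmetic, using sample numbers rather than live data (the library's exact rounding may differ):

```python
from datetime import date

# Sample values standing in for 'realInstalls' and 'released'
# (illustrative numbers, not live Play Store data).
real_installs = 10_931_553_905
released = date(2010, 10, 18)
today = date(2025, 6, 1)  # fixed date so the example is reproducible

# Assumed derivation: total installs divided by days since release.
app_age_days = (today - released).days
daily_installs = real_installs // app_age_days
monthly_installs = daily_installs * 30
```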

**Ratings (4 fields)**

* ``score`` - Average rating (0-5)
* ``ratings`` - Total number of ratings
* ``reviews`` - Total number of reviews
* ``histogram`` - Rating distribution [1★, 2★, 3★, 4★, 5★]
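
The ``histogram`` field converts directly into a percentage breakdown. A small self-contained example with sample counts (not live data):

```python
# Sample 'histogram' value: counts for 1-star through 5-star ratings.
histogram = [120, 80, 300, 900, 2600]

total = sum(histogram)
distribution = {stars: count / total
                for stars, count in enumerate(histogram, start=1)}

for stars in range(5, 0, -1):
    print(f"{stars} stars: {distribution[stars]:.1%}")
```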

**Ads (2 fields)**

* ``adSupported`` - Supports ads (boolean)
* ``containsAds`` - Contains ads (boolean)

**Technical (7 fields)**

* ``version`` - Current version
* ``androidVersion`` - Minimum Android version
* ``maxAndroidApi`` - Maximum Android API level
* ``minAndroidApi`` - Minimum Android API level
* ``appBundle`` - Bundle identifier
* ``contentRating`` - Age rating
* ``contentRatingDescription`` - Rating description

**Updates (1 field)**

* ``whatsNew`` - Changelog (array)

**Privacy (2 fields)**

* ``permissions`` - Required permissions (object)
* ``dataSafety`` - Data safety info (array)

**Pricing (7 fields)**

* ``price`` - App price
* ``currency`` - Currency code
* ``free`` - Is free (boolean)
* ``offersIAP`` - Has in-app purchases (boolean)
* ``inAppProductPrice`` - IAP price range
* ``sale`` - On sale (boolean)
* ``originalPrice`` - Original price if on sale

**Developer (8 fields)**

* ``developer`` - Developer name
* ``developerId`` - Developer ID
* ``developerEmail`` - Contact email
* ``developerWebsite`` - Website URL
* ``developerAddress`` - Physical address
* ``developerPhone`` - Contact phone
* ``publisherCountry`` - Publisher country (computed)
* ``privacyPolicy`` - Privacy policy URL

Common Use Cases
----------------

App Analytics
^^^^^^^^^^^^^

.. code-block:: python

   scraper = GPlayScraper()
   app = scraper.app_analyze('com.whatsapp')
   
   print(f"App: {app['title']}")
   print(f"Total Installs: {app['realInstalls']:,}")
   print(f"Daily Installs: {app['dailyInstalls']:,}")
   print(f"Monthly Installs: {app['monthlyInstalls']:,}")
   print(f"Age: {app['appAgeDays']} days")
   print(f"Rating: {app['score']}/5 ({app['ratings']:,} ratings)")

Competitor Comparison
^^^^^^^^^^^^^^^^^^^^^

.. code-block:: python

   apps = ['com.whatsapp', 'org.telegram.messenger', 'org.thoughtcrime.securesms']
   
   for app_id in apps:
       app = scraper.app_analyze(app_id)
       print(f"{app['title']}")
       print(f"  Installs: {app['realInstalls']:,}")
       print(f"  Rating: {app['score']}/5")
       print(f"  Daily Growth: {app['dailyInstalls']:,}")

Market Research
^^^^^^^^^^^^^^^

.. code-block:: python

   # Get key metrics for analysis
   fields = [
       'title', 'developer', 'score', 'ratings',
       'realInstalls', 'dailyInstalls', 'free', 'price'
   ]
   
   apps = ['com.app1', 'com.app2', 'com.app3']
   
   for app_id in apps:
       data = scraper.app_get_fields(app_id, fields)
       print(data)

See Also
--------

* :doc:`../fields` - Complete field reference
* :doc:`../examples` - More practical examples
* :doc:`../configuration` - Configuration options


================================================
FILE: docs/api/developer.rst
================================================
Developer Methods
=================

Get all apps from a specific developer or company.

Overview
--------

The Developer methods allow you to find all apps published by a developer, returning 11 fields per app.

Available Methods
-----------------

* ``developer_analyze()`` - Get all developer apps
* ``developer_get_field()`` - Get single field from all apps
* ``developer_get_fields()`` - Get multiple fields from all apps
* ``developer_print_field()`` - Print single field
* ``developer_print_fields()`` - Print multiple fields
* ``developer_print_all()`` - Print all apps as JSON

developer_analyze()
-------------------

Get all apps from a developer.

**Signature:**

.. code-block:: python

   developer_analyze(dev_id, count=100, lang='en', country='')

**Parameters:**

* ``dev_id`` (str, required) - Developer name or numeric ID
* ``count`` (int, optional) - Number of apps (default: 100)
* ``lang`` (str, optional) - Language code
* ``country`` (str, optional) - Country code

**Returns:**

List of dictionaries, each with 11 fields

**Example:**

.. code-block:: python

   from gplay_scraper import GPlayScraper

   scraper = GPlayScraper()
   
   # Using developer name
   apps = scraper.developer_analyze('Google LLC')
   
   for app in apps:
       print(f"{app['title']}")
       print(f"  Developer: {app['developer']}")
       print(f"  Rating: {app['score']}/5")
       print(f"  Free: {app['free']}")

**Using Numeric Developer ID:**

.. code-block:: python

   # Using numeric ID
   apps = scraper.developer_analyze('5700313618786177705')

Available Fields
----------------

Each app contains 11 fields:

* ``appId`` - Package identifier
* ``title`` - App name
* ``url`` - Play Store URL
* ``icon`` - App icon URL
* ``developer`` - Developer name
* ``description`` - App description
* ``score`` - Average rating (0-5)
* ``scoreText`` - Rating as text
* ``price`` - App price
* ``free`` - Is free (boolean)
* ``currency`` - Currency code

Common Use Cases
----------------

Portfolio Analysis
^^^^^^^^^^^^^^^^^^

.. code-block:: python

   scraper = GPlayScraper()
   apps = scraper.developer_analyze('Google LLC')
   
   # Calculate average rating
   avg_rating = sum(app['score'] for app in apps) / len(apps)
   
   # Count free vs paid
   free_count = sum(1 for app in apps if app['free'])
   paid_count = len(apps) - free_count
   
   print(f"Total apps: {len(apps)}")
   print(f"Average rating: {avg_rating:.2f}/5")
   print(f"Free apps: {free_count}, Paid apps: {paid_count}")

Competitive Analysis
^^^^^^^^^^^^^^^^^^^^

.. code-block:: python

   developers = ['Google LLC', 'Microsoft Corporation', 'Meta Platforms, Inc.']
   
   for dev in developers:
       apps = scraper.developer_analyze(dev)
       high_rated = [app for app in apps if app['score'] >= 4.5]
       
       print(f"\n{dev}:")
       print(f"  Total apps: {len(apps)}")
       print(f"  High-rated apps (4.5+): {len(high_rated)}")

See Also
--------

* :doc:`app` - Get detailed app information
* :doc:`similar` - Find similar apps


================================================
FILE: docs/api/list.rst
================================================
List Methods
============

Get top charts (top free, top paid, top grossing apps).

Overview
--------

The List methods access Play Store top charts, returning 14 fields per app.

list_analyze()
--------------

**Signature:**

.. code-block:: python

   list_analyze(collection='TOP_FREE', category='APPLICATION', 
                count=100, lang='en', country='')

**Parameters:**

* ``collection`` - 'TOP_FREE', 'TOP_PAID', 'TOP_GROSSING'
* ``category`` - App category (default: 'APPLICATION')
* ``count`` - Number of apps (default: 100, max: ~500)

**Example:**

.. code-block:: python

   from gplay_scraper import GPlayScraper

   scraper = GPlayScraper()
   
   # Top free games
   top_free = scraper.list_analyze('TOP_FREE', category='GAME', count=100)
   
   for i, app in enumerate(top_free, 1):
       print(f"{i}. {app['title']} - {app['score']}/5")

Available Fields
----------------

Each app contains 14 fields:

* ``title`` - App name
* ``appId`` - Package identifier
* ``url`` - Play Store URL
* ``icon`` - App icon URL
* ``screenshots`` - Screenshot URLs (array)
* ``developer`` - Developer name
* ``genre`` - Primary category
* ``installs`` - Install range string
* ``description`` - App description
* ``score`` - Average rating (0-5)
* ``scoreText`` - Rating as text
* ``price`` - App price
* ``free`` - Is free (boolean)
* ``currency`` - Currency code

Collections
-----------

* **TOP_FREE** - Top free apps
* **TOP_PAID** - Top paid apps
* **TOP_GROSSING** - Highest earning apps

App Categories (36)
-------------------

* ``APPLICATION`` - All apps (default)
* ``ANDROID_WEAR`` - Android Wear apps
* ``ART_AND_DESIGN`` - Art & design
* ``AUTO_AND_VEHICLES`` - Auto & vehicles
* ``BEAUTY`` - Beauty
* ``BOOKS_AND_REFERENCE`` - Books & reference
* ``BUSINESS`` - Business
* ``COMICS`` - Comics
* ``COMMUNICATION`` - Communication
* ``DATING`` - Dating
* ``EDUCATION`` - Education
* ``ENTERTAINMENT`` - Entertainment
* ``EVENTS`` - Events
* ``FINANCE`` - Finance
* ``FOOD_AND_DRINK`` - Food & drink
* ``HEALTH_AND_FITNESS`` - Health & fitness
* ``HOUSE_AND_HOME`` - House & home
* ``LIBRARIES_AND_DEMO`` - Libraries & demo
* ``LIFESTYLE`` - Lifestyle
* ``MAPS_AND_NAVIGATION`` - Maps & navigation
* ``MEDICAL`` - Medical
* ``MUSIC_AND_AUDIO`` - Music & audio
* ``NEWS_AND_MAGAZINES`` - News & magazines
* ``PARENTING`` - Parenting
* ``PERSONALIZATION`` - Personalization
* ``PHOTOGRAPHY`` - Photography
* ``PRODUCTIVITY`` - Productivity
* ``SHOPPING`` - Shopping
* ``SOCIAL`` - Social
* ``SPORTS`` - Sports
* ``TOOLS`` - Tools
* ``TRAVEL_AND_LOCAL`` - Travel & local
* ``VIDEO_PLAYERS`` - Video players & editors
* ``WATCH_FACE`` - Watch faces
* ``WEATHER`` - Weather
* ``FAMILY`` - Family

Game Categories (18)
---------------------

* ``GAME`` - All games
* ``GAME_ACTION`` - Action games
* ``GAME_ADVENTURE`` - Adventure games
* ``GAME_ARCADE`` - Arcade games
* ``GAME_BOARD`` - Board games
* ``GAME_CARD`` - Card games
* ``GAME_CASINO`` - Casino games
* ``GAME_CASUAL`` - Casual games
* ``GAME_EDUCATIONAL`` - Educational games
* ``GAME_MUSIC`` - Music games
* ``GAME_PUZZLE`` - Puzzle games
* ``GAME_RACING`` - Racing games
* ``GAME_ROLE_PLAYING`` - Role playing games
* ``GAME_SIMULATION`` - Simulation games
* ``GAME_SPORTS`` - Sports games
* ``GAME_STRATEGY`` - Strategy games
* ``GAME_TRIVIA`` - Trivia games
* ``GAME_WORD`` - Word games

Example
-------

.. code-block:: python

   # Top paid communication apps
   top_paid = scraper.list_analyze('TOP_PAID', 
       category='COMMUNICATION', 
       count=50)


================================================
FILE: docs/api/reviews.rst
================================================
Reviews Methods
===============

Extract user reviews and ratings with sorting options.

Overview
--------

The Reviews methods allow you to get user reviews with 8 fields per review, including content, rating, and metadata.

Available Methods
-----------------

* ``reviews_analyze()`` - Get all reviews
* ``reviews_get_field()`` - Get single field from all reviews
* ``reviews_get_fields()`` - Get multiple fields from all reviews
* ``reviews_print_field()`` - Print single field
* ``reviews_print_fields()`` - Print multiple fields
* ``reviews_print_all()`` - Print all reviews as JSON

reviews_analyze()
-----------------

Get user reviews with all details.

**Signature:**

.. code-block:: python

   reviews_analyze(app_id, count=100, sort='NEWEST', lang='en', country='')

**Parameters:**

* ``app_id`` (str, required) - Google Play app ID
* ``count`` (int, optional) - Number of reviews (default: 100, max: ~1000)
* ``sort`` (str, optional) - Sort order: 'NEWEST', 'RELEVANT', 'RATING' (default: 'NEWEST')
* ``lang`` (str, optional) - Language code
* ``country`` (str, optional) - Country code

**Returns:**

List of dictionaries, each with 8 fields

**Example:**

.. code-block:: python

   from gplay_scraper import GPlayScraper

   scraper = GPlayScraper()
   
   # Get newest reviews
   reviews = scraper.reviews_analyze('com.whatsapp', count=50, sort='NEWEST')
   
   for review in reviews:
       print(f"{review['userName']}: {review['score']}/5")
       print(f"Date: {review['at']}")
       print(f"Content: {review['content'][:100]}...")
       print(f"Helpful: {review['thumbsUpCount']} people")

Sort Options
^^^^^^^^^^^^

* **NEWEST** - Most recent reviews first (default)
* **RELEVANT** - Most helpful/relevant reviews
* **RATING** - Sorted by rating score

.. code-block:: python

   # Get most relevant reviews
   reviews = scraper.reviews_analyze('com.whatsapp', 
       count=100, 
       sort='RELEVANT')
   
   # Get reviews sorted by rating
   reviews = scraper.reviews_analyze('com.whatsapp', 
       count=100, 
       sort='RATING')

reviews_get_field()
-------------------

Get single field from all reviews.

**Example:**

.. code-block:: python

   scraper = GPlayScraper()
   
   # Get all usernames
   usernames = scraper.reviews_get_field('com.whatsapp', 'userName', count=10)
   
   # Get all scores
   scores = scraper.reviews_get_field('com.whatsapp', 'score', count=100)

reviews_get_fields()
--------------------

Get multiple fields from all reviews.

**Example:**

.. code-block:: python

   fields = ['userName', 'score', 'content', 'thumbsUpCount']
   reviews = scraper.reviews_get_fields('com.whatsapp', fields, count=50)

Available Fields
----------------

Each review contains 8 fields:

* ``reviewId`` - Unique review identifier
* ``userName`` - Reviewer's name
* ``userImage`` - Reviewer's profile image URL
* ``content`` - Review text
* ``score`` - Rating (1-5)
* ``thumbsUpCount`` - Number of helpful votes
* ``at`` - Review date (ISO format)
* ``appVersion`` - App version reviewed
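
The ``at`` field arrives as an ISO-formatted date string, so it can be parsed with the standard library. A sketch using a sample value (the exact string shape may vary):

```python
from datetime import datetime

# Sample 'at' value (illustrative, not a live review date).
review_at = "2024-03-15 10:30:00"
parsed = datetime.fromisoformat(review_at)

print(parsed.year, parsed.month, parsed.hour)
```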

Common Use Cases
----------------

Monitor Negative Reviews
^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: python

   scraper = GPlayScraper()
   
   reviews = scraper.reviews_analyze('com.myapp', count=100, sort='NEWEST')
   negative = [r for r in reviews if r['score'] <= 2]
   
   print(f"Found {len(negative)} negative reviews")
   for review in negative:
       print(f"{review['userName']}: {review['score']}/5")
       print(f"  {review['content']}")

Sentiment Analysis
^^^^^^^^^^^^^^^^^^

.. code-block:: python

   reviews = scraper.reviews_analyze('com.whatsapp', count=500)
   
   # Count by rating
   rating_counts = {1: 0, 2: 0, 3: 0, 4: 0, 5: 0}
   for review in reviews:
       rating_counts[review['score']] += 1
   
   print("Rating distribution:")
   for rating, count in rating_counts.items():
       print(f"  {rating}★: {count} reviews")

Find Helpful Reviews
^^^^^^^^^^^^^^^^^^^^

.. code-block:: python

   reviews = scraper.reviews_analyze('com.whatsapp', 
       count=100, 
       sort='RELEVANT')
   
   # Get most helpful
   most_helpful = sorted(reviews, 
       key=lambda x: x['thumbsUpCount'], 
       reverse=True)[:10]
   
   for review in most_helpful:
       print(f"{review['thumbsUpCount']} helpful votes")
       print(f"  {review['content'][:100]}...")

Track Version Feedback
^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: python

   reviews = scraper.reviews_analyze('com.myapp', count=200)
   
   # Group by app version
   by_version = {}
   for review in reviews:
       version = review['appVersion']
       if version not in by_version:
           by_version[version] = []
       by_version[version].append(review)
   
   # Analyze each version
   for version, version_reviews in by_version.items():
       avg_score = sum(r['score'] for r in version_reviews) / len(version_reviews)
       print(f"Version {version}: {avg_score:.2f}/5 ({len(version_reviews)} reviews)")

See Also
--------

* :doc:`app` - Get app information
* :doc:`../examples` - More examples


================================================
FILE: docs/api/search.rst
================================================
Search Methods
==============

Search for apps on Google Play Store by keyword and get results with 11 fields per app.

Overview
--------

The Search methods allow you to find apps by keyword, similar to using the Play Store search bar. Results include basic app information with 11 fields.

Available Methods
-----------------

* ``search_analyze()`` - Search and get all results
* ``search_get_field()`` - Get single field from all results
* ``search_get_fields()`` - Get multiple fields from all results
* ``search_print_field()`` - Print single field
* ``search_print_fields()`` - Print multiple fields
* ``search_print_all()`` - Print all results as JSON

search_analyze()
----------------

Search for apps and get complete results.

**Signature:**

.. code-block:: python

   search_analyze(query, count=100, lang='en', country='')

**Parameters:**

* ``query`` (str, required) - Search query string
* ``count`` (int, optional) - Number of results to return (default: 100, max: ~250)
* ``lang`` (str, optional) - Language code (default: 'en')
* ``country`` (str, optional) - Country code (default: '')

**Returns:**

List of dictionaries, each with 11 fields

**Example:**

.. code-block:: python

   from gplay_scraper import GPlayScraper

   scraper = GPlayScraper()
   results = scraper.search_analyze('messaging', count=10)
   
   for app in results:
       print(f"{app['title']} by {app['developer']}")
       print(f"  Rating: {app['score']}/5")
       print(f"  Free: {app['free']}")
       print(f"  URL: {app['url']}")

**Pagination Example:**

.. code-block:: python

   # Get top 100 results
   results = scraper.search_analyze('games', count=100)
   print(f"Found {len(results)} games")

search_get_field()
------------------

Get a single field from all search results.

**Signature:**

.. code-block:: python

   search_get_field(query, field, count=100, lang='en', country='')

**Parameters:**

* ``query`` (str, required) - Search query
* ``field`` (str, required) - Field name to retrieve
* ``count`` (int, optional) - Number of results
* ``lang`` (str, optional) - Language code
* ``country`` (str, optional) - Country code

**Returns:**

List of field values from all results

**Example:**

.. code-block:: python

   scraper = GPlayScraper()
   
   # Get all titles
   titles = scraper.search_get_field('messaging', 'title', count=10)
   print(titles)
   # ['WhatsApp Messenger', 'Telegram', 'Signal', ...]
   
   # Get all ratings
   scores = scraper.search_get_field('messaging', 'score', count=10)
   print(scores)
   # [4.2, 4.3, 4.5, ...]

search_get_fields()
-------------------

Get multiple fields from all search results.

**Signature:**

.. code-block:: python

   search_get_fields(query, fields, count=100, lang='en', country='')

**Parameters:**

* ``query`` (str, required) - Search query
* ``fields`` (list, required) - List of field names
* ``count`` (int, optional) - Number of results
* ``lang`` (str, optional) - Language code
* ``country`` (str, optional) - Country code

**Returns:**

List of dictionaries with requested fields

**Example:**

.. code-block:: python

   scraper = GPlayScraper()
   
   fields = ['title', 'developer', 'score', 'free']
   results = scraper.search_get_fields('games', fields, count=5)
   
   for app in results:
       print(f"{app['title']} - {app['score']}/5 - Free: {app['free']}")

search_print_field()
--------------------

Print single field from all search results.

**Signature:**

.. code-block:: python

   search_print_field(query, field, count=100, lang='en', country='')

**Returns:**

None (prints to console)

search_print_fields()
---------------------

Print multiple fields from all search results.

**Signature:**

.. code-block:: python

   search_print_fields(query, fields, count=100, lang='en', country='')

**Returns:**

None (prints to console)

search_print_all()
------------------

Print all search results as JSON.

**Signature:**

.. code-block:: python

   search_print_all(query, count=100, lang='en', country='')

**Returns:**

None (prints to console)

Available Fields
----------------

Each search result contains 11 fields:

* ``title`` - App name
* ``appId`` - Package identifier
* ``url`` - Play Store URL
* ``icon`` - App icon URL
* ``developer`` - Developer name
* ``summary`` - Short description
* ``score`` - Average rating (0-5)
* ``scoreText`` - Rating as text (e.g., "4.2★")
* ``price`` - App price
* ``free`` - Is free (boolean)
* ``currency`` - Currency code

Common Use Cases
----------------

Find Apps by Category
^^^^^^^^^^^^^^^^^^^^^

.. code-block:: python

   scraper = GPlayScraper()
   
   # Find fitness apps
   fitness_apps = scraper.search_analyze('fitness tracker', count=20)
   
   # Filter by rating
   high_rated = [app for app in fitness_apps if app['score'] >= 4.5]
   
   for app in high_rated:
       print(f"{app['title']}: {app['score']}/5")

Market Research
^^^^^^^^^^^^^^^

.. code-block:: python

   # Research competitors
   results = scraper.search_analyze('photo editor', count=50)
   
   # Analyze free vs paid
   free_apps = [app for app in results if app['free']]
   paid_apps = [app for app in results if not app['free']]
   
   print(f"Free apps: {len(free_apps)}")
   print(f"Paid apps: {len(paid_apps)}")
   
   # Average ratings (guard against empty groups to avoid division by zero)
   if free_apps:
       avg_free = sum(app['score'] for app in free_apps) / len(free_apps)
       print(f"Average free app rating: {avg_free:.2f}")
   if paid_apps:
       avg_paid = sum(app['score'] for app in paid_apps) / len(paid_apps)
       print(f"Average paid app rating: {avg_paid:.2f}")

Discovery
^^^^^^^^^

.. code-block:: python

   # Discover trending apps
   keywords = ['ai', 'chatbot', 'productivity']
   
   for keyword in keywords:
       results = scraper.search_analyze(keyword, count=5)
       print(f"\nTop {keyword} apps:")
       for app in results:
           print(f"  {app['title']} - {app['score']}/5")

Multi-Language Search
^^^^^^^^^^^^^^^^^^^^^

.. code-block:: python

   # Search in Spanish
   results_es = scraper.search_analyze('juegos', lang='es', count=10)
   
   # Search in French
   results_fr = scraper.search_analyze('jeux', lang='fr', count=10)
   
   # Regional search (UK)
   results_uk = scraper.search_analyze('games', country='gb', count=10)

See Also
--------

* :doc:`app` - Get detailed app information
* :doc:`../examples` - More practical examples
* :doc:`../configuration` - Configuration options


================================================
FILE: docs/api/similar.rst
================================================
Similar Methods
===============

Find apps similar to a given app (competitors or alternatives).

Overview
--------

The Similar methods help you discover competitor apps or alternatives, returning 11 fields per app.

similar_analyze()
-----------------

**Signature:**

.. code-block:: python

   similar_analyze(app_id, count=100, lang='en', country='')

**Example:**

.. code-block:: python

   from gplay_scraper import GPlayScraper

   scraper = GPlayScraper()
   similar = scraper.similar_analyze('com.whatsapp', count=20)
   
   for app in similar:
       print(f"{app['title']} - {app['score']}/5")

Available Fields
----------------

Same 11 fields as Developer Methods.

Common Use Cases
----------------

Competitor Analysis
^^^^^^^^^^^^^^^^^^^

.. code-block:: python

   # Find competitors
   my_app = scraper.app_analyze('com.myapp')
   competitors = scraper.similar_analyze('com.myapp', count=10)
   
   print(f"My App: {my_app['score']}/5")
   print("\nCompetitors:")
   for comp in competitors:
       print(f"  {comp['title']}: {comp['score']}/5")


================================================
FILE: docs/api/suggest.rst
================================================
Suggest Methods
===============

Get search suggestions and autocomplete.

Overview
--------

The Suggest methods provide search suggestions, returning lists of strings.

suggest_analyze()
-----------------

Get search suggestions for a term.

**Signature:**

.. code-block:: python

   suggest_analyze(term, count=5, lang='en', country='')

**Example:**

.. code-block:: python

   from gplay_scraper import GPlayScraper

   scraper = GPlayScraper()
   suggestions = scraper.suggest_analyze('mine', count=10)
   
   print(suggestions)
   # ['minecraft', 'minesweeper', 'mineplex', ...]

suggest_nested()
----------------

Get nested suggestions (suggestions for suggestions).

**Signature:**

.. code-block:: python

   suggest_nested(term, count=5, lang='en', country='')

**Returns:**

Dictionary mapping terms to their suggestion lists

**Example:**

.. code-block:: python

   nested = scraper.suggest_nested('game', count=5)
   
   for term, suggestions in nested.items():
       print(f"\n{term}:")
       for suggestion in suggestions:
           print(f"  - {suggestion}")

Common Use Cases
----------------

Keyword Research
^^^^^^^^^^^^^^^^

.. code-block:: python

   # Find popular search terms
   keywords = ['fitness', 'photo', 'music']
   
   for keyword in keywords:
       suggestions = scraper.suggest_analyze(keyword, count=10)
       print(f"\n{keyword} suggestions:")
       for s in suggestions:
           print(f"  {s}")

Trend Discovery
^^^^^^^^^^^^^^^

.. code-block:: python

   # Discover trending topics
   term = "ai"
   suggestions = scraper.suggest_analyze(term, count=20)
   
   print(f"Popular '{term}' searches:")
   for suggestion in suggestions:
       print(f"  {suggestion}")


================================================
FILE: docs/conf.py
================================================
project = 'GPlay Scraper'
copyright = '2025, GPlay Scraper'
author = 'GPlay Scraper'
release = '1.0.5'

extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.napoleon',
    'sphinx.ext.viewcode',
]

try:
    import sphinx_copybutton
    extensions.append('sphinx_copybutton')
except ImportError:
    pass

templates_path = ['_templates']
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
language = 'en'

html_theme = 'sphinx_book_theme'

html_theme_options = {
    "repository_url": "https://github.com/mohammedcha/gplay-scraper",
    "repository_branch": "main",
    "use_repository_button": True,
    "use_issues_button": True,
    "use_edit_page_button": False,
    "use_download_button": True,
    "home_page_in_toc": True,
    "show_navbar_depth": 2,
    "show_toc_level": 2,
    "navigation_with_keys": True,
    "collapse_navbar": False,
    "logo": {
        "text": "GPlay Scraper",
    },
    "extra_footer": "<p>Built with ❤️ using Sphinx Book Theme</p>",
    "search_bar_text": "Search documentation...",
    "icon_links": [
        {
            "name": "GitHub",
            "url": "https://github.com/mohammedcha/gplay-scraper",
            "icon": "fa-brands fa-github",
            "type": "fontawesome",
        },
        {
            "name": "PyPI",
            "url": "https://pypi.org/project/gplay-scraper/",
            "icon": "fa-brands fa-python",
            "type": "fontawesome",
        },
    ],
}

pygments_style = 'monokai'
pygments_dark_style = 'monokai'

html_title = "GPlay Scraper"
html_static_path = ['_static']

html_logo = "_static/logo.png"
html_favicon = "_static/favicon.png"

if 'sphinx_copybutton' in extensions:
    copybutton_prompt_text = r">>> |\.\.\. |\$ |In \[\d*\]: | {2,5}\.\.\.: | {5,8}: "
    copybutton_prompt_is_regexp = True


================================================
FILE: docs/configuration.rst
================================================
Configuration
=============

Advanced configuration options for GPlay Scraper.

HTTP Client Selection
---------------------

Choose from 7 HTTP clients with automatic fallback.

.. code-block:: python

   from gplay_scraper import GPlayScraper

   # Default (requests)
   scraper = GPlayScraper()
   
   # Use curl_cffi (best for bypassing blocks)
   scraper = GPlayScraper(http_client='curl_cffi')
   
   # Use tls_client (advanced TLS fingerprinting)
   scraper = GPlayScraper(http_client='tls_client')
   
   # Use httpx (modern HTTP/2)
   scraper = GPlayScraper(http_client='httpx')

Available HTTP Clients
^^^^^^^^^^^^^^^^^^^^^^

1. **requests** - Default, most compatible
2. **curl_cffi** - Best for anti-bot bypass (Chrome 110 impersonation)
3. **tls_client** - Advanced TLS fingerprinting (Chrome 112)
4. **urllib3** - Low-level HTTP with connection pooling
5. **cloudscraper** - Cloudflare bypass
6. **aiohttp** - Async HTTP support
7. **httpx** - Modern HTTP/2 client

The library automatically falls back to the next available client if one fails.
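
Conceptually, the fallback loop behaves like the sketch below. This is an
illustration of the pattern only, not the library's internal code; the
client functions here are stand-ins.

.. code-block:: python

   def fetch_with_fallback(url, clients):
       """Try each client callable in order; re-raise the last error if all fail."""
       last_error = None
       for client in clients:
           try:
               return client(url)
           except Exception as exc:  # real code would catch narrower errors
               last_error = exc
       raise last_error

   def blocked_client(url):
       raise ConnectionError("blocked")

   def working_client(url):
       return f"response from {url}"

   # The second client succeeds after the first one fails
   print(fetch_with_fallback("https://play.google.com", [blocked_client, working_client]))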

Rate Limiting
-------------

Configure delay between requests to avoid rate limits.

.. code-block:: python

   from gplay_scraper import Config
   
   # Set rate limit delay (seconds)
   Config.RATE_LIMIT_DELAY = 2.0  # 2 seconds between requests
   
   # Or use default (1.0 second)
   Config.RATE_LIMIT_DELAY = 1.0

Language & Region
-----------------

Set default language and country for all requests.

.. code-block:: python

   from gplay_scraper import Config
   
   # Set default language
   Config.DEFAULT_LANGUAGE = 'es'  # Spanish
   
   # Set default country
   Config.DEFAULT_COUNTRY = 'mx'  # Mexico

Common Language Codes
^^^^^^^^^^^^^^^^^^^^^^

* ``en`` - English
* ``es`` - Spanish  
* ``fr`` - French
* ``de`` - German
* ``it`` - Italian
* ``pt`` - Portuguese
* ``ja`` - Japanese
* ``ko`` - Korean
* ``zh`` - Chinese
* ``ru`` - Russian
* ``ar`` - Arabic
* ``hi`` - Hindi

Common Country Codes
^^^^^^^^^^^^^^^^^^^^^

* ``us`` - United States
* ``gb`` - United Kingdom
* ``ca`` - Canada
* ``de`` - Germany
* ``fr`` - France
* ``es`` - Spain
* ``mx`` - Mexico
* ``jp`` - Japan
* ``kr`` - South Korea
* ``cn`` - China
* ``in`` - India
* ``br`` - Brazil

Request Timeout
---------------

Configure HTTP request timeout.

.. code-block:: python

   from gplay_scraper import Config
   
   # Set timeout (seconds)
   Config.DEFAULT_TIMEOUT = 30  # 30 seconds
   
   # Or use default (10 seconds)
   Config.DEFAULT_TIMEOUT = 10

Retry Configuration
-------------------

Configure automatic retry behavior.

.. code-block:: python

   from gplay_scraper import Config
   
   # Set number of retries
   Config.DEFAULT_RETRY_COUNT = 5  # Try 5 times
   
   # Or use default (3 retries)
   Config.DEFAULT_RETRY_COUNT = 3

Image Asset Sizes
-----------------

Choose the image size returned for media fields (icon, screenshots, header image) via the per-request ``assets`` option.

.. code-block:: python

   scraper = GPlayScraper()
   
   # Small images (512px)
   app = scraper.app_analyze('com.whatsapp', assets='SMALL')
   
   # Medium images (1024px) - default
   app = scraper.app_analyze('com.whatsapp', assets='MEDIUM')
   
   # Large images (2048px)
   app = scraper.app_analyze('com.whatsapp', assets='LARGE')
   
   # Original size (maximum)
   app = scraper.app_analyze('com.whatsapp', assets='ORIGINAL')

Per-Request Configuration
--------------------------

Override defaults for specific requests.

.. code-block:: python

   scraper = GPlayScraper()
   
   # Per-request language
   app_es = scraper.app_analyze('com.whatsapp', lang='es')
   app_fr = scraper.app_analyze('com.whatsapp', lang='fr')
   
   # Per-request country
   app_uk = scraper.app_analyze('com.whatsapp', country='gb')
   app_de = scraper.app_analyze('com.whatsapp', country='de')
   
   # Per-request images
   app_large = scraper.app_analyze('com.whatsapp', assets='LARGE')

Logging
-------

Configure logging level for debugging.

.. code-block:: python

   import logging
   
   # Enable debug logging
   logging.basicConfig(level=logging.DEBUG)
   
   # Enable info logging
   logging.basicConfig(level=logging.INFO)
   
   # Disable logging
   logging.basicConfig(level=logging.ERROR)

Complete Configuration Example
-------------------------------

.. code-block:: python

   from gplay_scraper import GPlayScraper, Config
   import logging
   
   # Configure library
   Config.RATE_LIMIT_DELAY = 2.0
   Config.DEFAULT_LANGUAGE = 'en'
   Config.DEFAULT_COUNTRY = 'us'
   Config.DEFAULT_TIMEOUT = 30
   Config.DEFAULT_RETRY_COUNT = 5
   
   # Configure logging
   logging.basicConfig(
       level=logging.INFO,
       format='%(asctime)s - %(levelname)s - %(message)s'
   )
   
   # Initialize with preferred HTTP client
   scraper = GPlayScraper(http_client='curl_cffi')
   
   # Use the scraper
   app = scraper.app_analyze('com.whatsapp')

Environment Variables
---------------------

You can also use environment variables for configuration.

.. code-block:: bash

   # Set in your shell or .env file
   export GPLAY_HTTP_CLIENT=curl_cffi
   export GPLAY_RATE_LIMIT=2.0
   export GPLAY_LANGUAGE=en
   export GPLAY_COUNTRY=us

Best Practices
--------------

1. **Use curl_cffi or tls_client** for better success rates
2. **Set rate limiting** to 2+ seconds for large batch operations
3. **Use field filtering** to reduce data transfer and parsing time
4. **Enable logging** during development, disable in production
5. **Handle exceptions** gracefully for production use
6. **Reuse scraper instance** instead of creating new ones

See Also
--------

* :doc:`error_handling` - Error handling guide
* :doc:`examples` - Practical examples


================================================
FILE: docs/error_handling.rst
================================================
Error Handling
==============

Guide to handling errors and exceptions in GPlay Scraper.

Exception Types
---------------

GPlay Scraper provides 6 custom exception types:

AppNotFoundError
^^^^^^^^^^^^^^^^

Raised when an app, developer, or resource is not found.

.. code-block:: python

   from gplay_scraper import GPlayScraper
   from gplay_scraper.exceptions import AppNotFoundError

   scraper = GPlayScraper()
   
   try:
       app = scraper.app_analyze('invalid.app.id')
   except AppNotFoundError as e:
       print(f"App not found: {e}")

NetworkError
^^^^^^^^^^^^

Raised when network or HTTP errors occur.

.. code-block:: python

   from gplay_scraper.exceptions import NetworkError

   try:
       app = scraper.app_analyze('com.whatsapp')
   except NetworkError as e:
       print(f"Network error: {e}")

DataParsingError
^^^^^^^^^^^^^^^^

Raised when JSON parsing or data extraction fails.

.. code-block:: python

   from gplay_scraper.exceptions import DataParsingError

   try:
       app = scraper.app_analyze('com.whatsapp')
   except DataParsingError as e:
       print(f"Parsing error: {e}")

RateLimitError
^^^^^^^^^^^^^^

Raised when rate limits are exceeded.

.. code-block:: python

   from gplay_scraper.exceptions import RateLimitError

   try:
       # Making too many requests too quickly
       for i in range(1000):
           app = scraper.app_analyze(f'com.app{i}')
   except RateLimitError as e:
       print(f"Rate limited: {e}")

InvalidAppIdError
^^^^^^^^^^^^^^^^^

Raised when input validation fails.

.. code-block:: python

   from gplay_scraper.exceptions import InvalidAppIdError

   try:
       app = scraper.app_analyze('')  # Empty app ID
   except InvalidAppIdError as e:
       print(f"Invalid input: {e}")

GPlayScraperError
^^^^^^^^^^^^^^^^^

Base exception for all library errors.

.. code-block:: python

   from gplay_scraper.exceptions import GPlayScraperError

   try:
       app = scraper.app_analyze('com.whatsapp')
   except GPlayScraperError as e:
       print(f"Library error: {e}")

Comprehensive Error Handling
-----------------------------

Handle all common exceptions.

.. code-block:: python

   from gplay_scraper import GPlayScraper
   from gplay_scraper.exceptions import (
       AppNotFoundError,
       NetworkError,
       DataParsingError,
       RateLimitError,
       InvalidAppIdError,
       GPlayScraperError
   )

   scraper = GPlayScraper()

   try:
       app = scraper.app_analyze('com.whatsapp')
   except InvalidAppIdError as e:
       print(f"Invalid app ID: {e}")
   except AppNotFoundError as e:
       print(f"App not found: {e}")
   except NetworkError as e:
       print(f"Network error: {e}")
   except DataParsingError as e:
       print(f"Parsing error: {e}")
   except RateLimitError as e:
       print(f"Rate limited: {e}")
   except GPlayScraperError as e:
       print(f"Unknown library error: {e}")

Automatic Retries
-----------------

The library automatically retries failed requests with HTTP client fallback.

.. code-block:: python

   from gplay_scraper import GPlayScraper, Config
   
   # Configure retries
   Config.DEFAULT_RETRY_COUNT = 5  # Try 5 times
   
   scraper = GPlayScraper()
   app = scraper.app_analyze('com.whatsapp')
   # Automatically retries up to 5 times if it fails
   # Switches HTTP clients between retries

Graceful Degradation
--------------------

Methods return None or empty lists on failure instead of crashing.

.. code-block:: python

   scraper = GPlayScraper()
   
   # Returns None if app not found (after retries)
   app = scraper.app_analyze('invalid.app')
   if app is None:
       print("App not found")
   
   # Returns empty list if search fails
   results = scraper.search_analyze('invalid query')
   if not results:
       print("No results found")

Production Error Handling
--------------------------

Example for production use.

.. code-block:: python

   import logging
   from gplay_scraper import GPlayScraper, Config
   from gplay_scraper.exceptions import GPlayScraperError

   # Configure logging
   logging.basicConfig(
       level=logging.ERROR,
       format='%(asctime)s - %(levelname)s - %(message)s',
       filename='gplay_scraper.log'
   )

   logger = logging.getLogger(__name__)

   # Configure retries
   Config.DEFAULT_RETRY_COUNT = 5

   scraper = GPlayScraper(http_client='curl_cffi')

   def safe_analyze_app(app_id):
       """Safely analyze an app with error handling."""
       try:
           return scraper.app_analyze(app_id)
       except GPlayScraperError as e:
           logger.error(f"Failed to analyze {app_id}: {e}")
           return None

   # Use in production
   app_ids = ['com.app1', 'com.app2', 'com.app3']
   results = []
   
   for app_id in app_ids:
       app = safe_analyze_app(app_id)
       if app:
           results.append(app)
   
   print(f"Successfully analyzed {len(results)}/{len(app_ids)} apps")

Batch Processing with Error Handling
-------------------------------------

.. code-block:: python

   from gplay_scraper import GPlayScraper
   from gplay_scraper.exceptions import GPlayScraperError

   scraper = GPlayScraper()
   
   app_ids = ['com.app1', 'com.app2', 'invalid.app', 'com.app3']
   
   successful = []
   failed = []
   
   for app_id in app_ids:
       try:
           app = scraper.app_analyze(app_id)
           if app:
               successful.append(app)
       except GPlayScraperError as e:
           failed.append((app_id, str(e)))
   
   print(f"Successful: {len(successful)}")
   print(f"Failed: {len(failed)}")
   
   if failed:
       print("\nFailed apps:")
       for app_id, error in failed:
           print(f"  {app_id}: {error}")

Best Practices
--------------

1. **Always handle exceptions** in production code
2. **Use specific exceptions** when possible instead of catching all
3. **Log errors** for debugging and monitoring
4. **Implement retries** for transient failures
5. **Use graceful degradation** - continue processing even if some items fail
6. **Monitor error rates** to detect issues early

See Also
--------

* :doc:`configuration` - Configuration options
* :doc:`examples` - More practical examples


================================================
FILE: docs/examples.rst
================================================
Examples
========

Practical examples of using GPlay Scraper for common tasks.

App Analytics Dashboard
-----------------------

Track key metrics for your app.

.. code-block:: python

   from gplay_scraper import GPlayScraper

   scraper = GPlayScraper()
   app = scraper.app_analyze('com.myapp')
   
   print("=== App Analytics Dashboard ===")
   print(f"App: {app['title']}")
   print(f"Developer: {app['developer']}")
   print(f"Rating: {app['score']}/5 ({app['ratings']:,} ratings)")
   print(f"\nInstall Metrics:")
   print(f"  Total Installs: {app['realInstalls']:,}")
   print(f"  Daily Installs: {app['dailyInstalls']:,}")
   print(f"  Monthly Installs: {app['monthlyInstalls']:,}")
   print(f"  App Age: {app['appAgeDays']} days")
   print(f"\nRating Distribution:")
   hist = app['histogram']
   for i, count in enumerate(hist, 1):
       print(f"  {i}★: {count:,}")

Market Research
---------------

Analyze a market segment.

.. code-block:: python

   scraper = GPlayScraper()
   
   # Search for fitness apps
   results = scraper.search_analyze('fitness tracker', count=100)
   
   # Filter by rating
   high_rated = [app for app in results if app['score'] >= 4.5]
   free_apps = [app for app in high_rated if app['free']]
   
   print(f"Total fitness tracker apps: {len(results)}")
   print(f"High-rated (4.5+): {len(high_rated)}")
   print(f"High-rated & Free: {len(free_apps)}")
   
   print("\nTop 5 Free High-Rated Apps:")
   for app in free_apps[:5]:
       print(f"  {app['title']}: {app['score']}/5")

Competitor Monitoring
---------------------

Track your competitors.

.. code-block:: python

   scraper = GPlayScraper()
   
   competitors = ['com.competitor1', 'com.competitor2', 'com.competitor3']
   
   print("Competitor Analysis")
   print("-" * 60)
   
   for app_id in competitors:
       app = scraper.app_analyze(app_id)
       reviews = scraper.reviews_analyze(app_id, count=100, sort='NEWEST')
       
       avg_recent_rating = (
           sum(r['score'] for r in reviews) / len(reviews) if reviews else 0.0
       )
       
       print(f"\n{app['title']}")
       print(f"  Overall Rating: {app['score']}/5")
       print(f"  Recent Rating: {avg_recent_rating:.2f}/5")
       print(f"  Daily Installs: {app['dailyInstalls']:,}")
       print(f"  Total Installs: {app['realInstalls']:,}")

Review Sentiment Analysis
--------------------------

Analyze user feedback.

.. code-block:: python

   scraper = GPlayScraper()
   
   reviews = scraper.reviews_analyze('com.myapp', count=500)
   
   # Categorize by rating
   positive = [r for r in reviews if r['score'] >= 4]
   neutral = [r for r in reviews if r['score'] == 3]
   negative = [r for r in reviews if r['score'] <= 2]
   
   print("Review Sentiment Analysis")
   print(f"Total Reviews: {len(reviews)}")
   print(f"Positive (4-5★): {len(positive)} ({len(positive)/len(reviews)*100:.1f}%)")
   print(f"Neutral (3★): {len(neutral)} ({len(neutral)/len(reviews)*100:.1f}%)")
   print(f"Negative (1-2★): {len(negative)} ({len(negative)/len(reviews)*100:.1f}%)")
   
   # Show recent negative reviews
   print("\nRecent Negative Reviews:")
   for review in negative[:5]:
       print(f"  {review['userName']}: {review['score']}/5")
       print(f"    {review['content'][:100]}...")

Top Charts Tracking
-------------------

Monitor top charts positions.

.. code-block:: python

   scraper = GPlayScraper()
   
   # Track top free games
   top_games = scraper.list_analyze('TOP_FREE', category='GAME', count=50)
   
   # Find your app's position
   my_app_id = 'com.mygame'
   position = next((i for i, app in enumerate(top_games, 1) 
                    if app['appId'] == my_app_id), None)
   
   if position:
       print(f"Your game is ranked #{position} in top free games!")
   else:
       print("Your game is not in top 50")
   
   # Show top 10
   print("\nTop 10 Free Games:")
   for i, app in enumerate(top_games[:10], 1):
       print(f"{i}. {app['title']} - {app['score']}/5")

Developer Portfolio Overview
-----------------------------

Analyze a developer's entire portfolio.

.. code-block:: python

   scraper = GPlayScraper()
   
   apps = scraper.developer_analyze('Google LLC')
   
   # Calculate metrics
   avg_rating = sum(app['score'] for app in apps) / len(apps)
   free_count = sum(1 for app in apps if app['free'])
   high_rated = [app for app in apps if app['score'] >= 4.5]
   
   print(f"Developer: Google LLC")
   print(f"Total Apps: {len(apps)}")
   print(f"Average Rating: {avg_rating:.2f}/5")
   print(f"Free Apps: {free_count}/{len(apps)}")
   print(f"High-Rated Apps (4.5+): {len(high_rated)}")
   
   # Best rated apps
   sorted_apps = sorted(apps, key=lambda x: x['score'], reverse=True)
   print("\nTop 5 Highest Rated:")
   for app in sorted_apps[:5]:
       print(f"  {app['title']}: {app['score']}/5")

Batch Data Collection
----------------------

Collect data for multiple apps efficiently.

.. code-block:: python

   import json
   from gplay_scraper import GPlayScraper

   scraper = GPlayScraper()
   
   app_ids = [
       'com.whatsapp',
       'org.telegram.messenger',
       'org.thoughtcrime.securesms',
       'com.discord'
   ]
   
   results = []
   for app_id in app_ids:
       # Get only the fields you need
       fields = ['title', 'developer', 'score', 'realInstalls', 'dailyInstalls']
       data = scraper.app_get_fields(app_id, fields)
       results.append(data)
   
   # Save to JSON
   with open('messaging_apps.json', 'w') as f:
       json.dump(results, f, indent=2)
   
   print(f"Collected data for {len(results)} apps")

Multi-Language Content
----------------------

Get localized app information.

.. code-block:: python

   scraper = GPlayScraper()
   
   languages = {
       'en': 'English',
       'es': 'Spanish',
       'fr': 'French',
       'de': 'German',
       'ja': 'Japanese'
   }
   
   for lang_code, lang_name in languages.items():
       app = scraper.app_analyze('com.whatsapp', lang=lang_code)
       print(f"\n{lang_name} ({lang_code}):")
       print(f"  Title: {app['title']}")
       print(f"  Summary: {app['summary']}")

Trend Discovery
---------------

Discover trending apps in a category.

.. code-block:: python

   scraper = GPlayScraper()
   
   # Get top free apps
   top_free = scraper.list_analyze('TOP_FREE', category='PRODUCTIVITY', count=100)
   
   # List results don't include release dates, so fetch full app data
   # to spot newer apps (under 180 days old) with high install velocity
   trending = []
   for app in top_free[:20]:
       full_data = scraper.app_analyze(app['appId'])
       if full_data['appAgeDays'] < 180 and full_data['dailyInstalls'] > 10000:
           trending.append(full_data)
   
   print("Trending Productivity Apps:")
   for app in trending:
       print(f"  {app['title']}")
       print(f"    Daily Installs: {app['dailyInstalls']:,}")
       print(f"    Rating: {app['score']}/5")

See Also
--------

* :doc:`quickstart` - Basic usage guide
* :doc:`api/app` - Complete API reference
* :doc:`configuration` - Configuration options


================================================
FILE: docs/fields.rst
================================================
Field Reference
===============

Complete reference of all 112 fields returned by GPlay Scraper.

App Fields (57 Fields)
-----------------------

Basic Information (5 fields)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

* ``appId`` (string) - Package identifier (e.g., "com.whatsapp")
* ``title`` (string) - App name
* ``summary`` (string) - Short description
* ``description`` (string) - Full description
* ``appUrl`` (string) - Play Store URL

Category (4 fields)
^^^^^^^^^^^^^^^^^^^

* ``genre`` (string) - Primary category
* ``genreId`` (string) - Category ID
* ``categories`` (array) - All categories
* ``available`` (boolean) - Availability status

Release & Updates (3 fields)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

* ``released`` (string) - Release date (e.g., "Oct 18, 2010")
* ``appAgeDays`` (integer) - Days since release (computed)
* ``lastUpdated`` (string) - Last update date

Media (5 fields)
^^^^^^^^^^^^^^^^

* ``icon`` (string) - App icon URL
* ``headerImage`` (string) - Header image URL
* ``screenshots`` (array) - Screenshot URLs
* ``video`` (string or null) - Promotional video URL
* ``videoImage`` (string or null) - Video thumbnail URL

Install Statistics (9 fields)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

* ``installs`` (string) - Install range (e.g., "10,000,000,000+")
* ``minInstalls`` (integer) - Minimum installs
* ``realInstalls`` (integer) - Exact install count
* ``dailyInstalls`` (integer) - Average daily installs (computed)
* ``minDailyInstalls`` (integer) - Min daily installs (computed)
* ``realDailyInstalls`` (integer) - Real daily installs (computed)
* ``monthlyInstalls`` (integer) - Average monthly installs (computed)
* ``minMonthlyInstalls`` (integer) - Min monthly installs (computed)
* ``realMonthlyInstalls`` (integer) - Real monthly installs (computed)

Ratings (4 fields)
^^^^^^^^^^^^^^^^^^

* ``score`` (float) - Average rating (0-5)
* ``ratings`` (integer) - Total ratings count
* ``reviews`` (integer) - Total reviews count
* ``histogram`` (array) - Rating distribution [1★, 2★, 3★, 4★, 5★]

Ads (2 fields)
^^^^^^^^^^^^^^

* ``adSupported`` (boolean) - Supports ads
* ``containsAds`` (boolean) - Contains ads

Technical (7 fields)
^^^^^^^^^^^^^^^^^^^^

* ``version`` (string) - Current version
* ``androidVersion`` (string) - Minimum Android version
* ``maxAndroidApi`` (integer) - Maximum Android API
* ``minAndroidApi`` (string or integer) - Minimum Android API
* ``appBundle`` (string) - Bundle identifier
* ``contentRating`` (string) - Age rating
* ``contentRatingDescription`` (string) - Rating description

Updates (1 field)
^^^^^^^^^^^^^^^^^

* ``whatsNew`` (array) - Changelog entries

Privacy (2 fields)
^^^^^^^^^^^^^^^^^^

* ``permissions`` (object) - Required permissions
* ``dataSafety`` (array) - Data safety information

Pricing (7 fields)
^^^^^^^^^^^^^^^^^^

* ``price`` (number) - App price
* ``currency`` (string) - Currency code
* ``free`` (boolean) - Is free
* ``offersIAP`` (boolean) - Has in-app purchases
* ``inAppProductPrice`` (string or null) - IAP price range
* ``sale`` (boolean) - On sale
* ``originalPrice`` (number or null) - Original price if on sale

Developer (8 fields)
^^^^^^^^^^^^^^^^^^^^

* ``developer`` (string) - Developer name
* ``developerId`` (string) - Developer ID
* ``developerEmail`` (string) - Contact email
* ``developerWebsite`` (string) - Website URL
* ``developerAddress`` (string) - Physical address
* ``developerPhone`` (string or null) - Contact phone
* ``publisherCountry`` (string) - Publisher country (computed)
* ``privacyPolicy`` (string) - Privacy policy URL

Search Fields (11 Fields)
--------------------------

* ``title`` - App name
* ``appId`` - Package identifier
* ``url`` - Play Store URL
* ``icon`` - App icon URL
* ``developer`` - Developer name
* ``summary`` - Short description
* ``score`` - Average rating (0-5)
* ``scoreText`` - Rating as text
* ``price`` - App price
* ``free`` - Is free (boolean)
* ``currency`` - Currency code

Review Fields (8 Fields)
-------------------------

* ``reviewId`` - Unique review ID
* ``userName`` - Reviewer name
* ``userImage`` - Reviewer image URL
* ``content`` - Review text
* ``score`` - Rating (1-5)
* ``thumbsUpCount`` - Helpful votes
* ``at`` - Review date (ISO format)
* ``appVersion`` - App version reviewed

Developer App Fields (11 Fields)
---------------------------------

Same as Search Fields.

Similar App Fields (11 Fields)
-------------------------------

Same as Search Fields.

List (Top Charts) Fields (14 Fields)
-------------------------------------

* ``title`` - App name
* ``appId`` - Package identifier
* ``url`` - Play Store URL
* ``icon`` - App icon URL
* ``screenshots`` - Screenshot URLs (array)
* ``developer`` - Developer name
* ``genre`` - Category/genre
* ``installs`` - Install count
* ``description`` - App description
* ``score`` - Average rating
* ``scoreText`` - Rating as text
* ``price`` - App price
* ``free`` - Is free (boolean)
* ``currency`` - Currency code

Computed Fields
---------------

The following 8 fields are computed at runtime:

**appAgeDays**
   Calculated as: ``(current_date - release_date).days``

**dailyInstalls**
   Calculated as: ``total_installs / days_since_release``

**minDailyInstalls**
   Calculated as: ``min_installs / days_since_release``

**realDailyInstalls**
   Calculated as: ``real_installs / days_since_release``

**monthlyInstalls**
   Calculated as: ``total_installs / (days_since_release / 30.44)``

**minMonthlyInstalls**
   Calculated as: ``min_installs / (days_since_release / 30.44)``

**realMonthlyInstalls**
   Calculated as: ``real_installs / (days_since_release / 30.44)``

**publisherCountry**
   Extracted from developer phone prefix or address
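
The install-velocity formulas above can be reproduced with a few lines of
plain Python. This is an illustrative sketch using only the standard
library, not a function exported by this package; 30.44 is the average
number of days per month, as in the formulas above.

.. code-block:: python

   from datetime import date

   def install_velocity(real_installs, release_date, today):
       """Recompute appAgeDays, realDailyInstalls and realMonthlyInstalls."""
       age_days = (today - release_date).days
       daily = real_installs / age_days
       monthly = real_installs / (age_days / 30.44)
       return age_days, daily, monthly

   age, daily, monthly = install_velocity(
       10_000_000, date(2020, 1, 1), today=date(2021, 1, 1)
   )
   print(age)             # 366 (2020 was a leap year)
   print(round(daily))    # 27322
   print(round(monthly))  # 831694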

Field Count Summary
-------------------

* App: 57 fields
* Search: 11 fields
* Reviews: 8 fields
* Developer: 11 fields
* Similar: 11 fields
* List: 14 fields
* **Total: 112 unique fields**


================================================
FILE: docs/index.rst
================================================
GPlay Scraper Documentation
============================

A comprehensive Python library for scraping Google Play Store data with 40 methods across 7 categories.

Features
--------

* **57 app fields** including install analytics
* **40 methods** for different data types
* **7 HTTP clients** with automatic fallback
* **Multi-language** and **multi-region** support
* **Automatic retries** and error handling
* **Rate limiting** built-in

Quick Example
-------------

.. code-block:: python

   from gplay_scraper import GPlayScraper

   scraper = GPlayScraper()
   
   # Get complete app data (57 fields)
   app = scraper.app_analyze('com.whatsapp')
   print(app['title'])              # WhatsApp Messenger
   print(app['realInstalls'])       # 10931553905
   print(app['dailyInstalls'])      # 1815870
   print(app['publisherCountry'])   # United States

Table of Contents
-----------------

.. toctree::
   :maxdepth: 2
   :caption: Getting Started

   installation
   quickstart
   examples

.. toctree::
   :maxdepth: 2
   :caption: API Reference

   api/app
   api/search
   api/reviews
   api/developer
   api/similar
   api/list
   api/suggest

.. toctree::
   :maxdepth: 2
   :caption: Advanced

   configuration
   error_handling
   fields

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`


================================================
FILE: docs/installation.rst
================================================
Installation
============

Requirements
------------

* Python 3.7 or higher
* pip package manager

Basic Installation
------------------

Install using pip:

.. code-block:: bash

   pip install gplay-scraper

This installs the library with the default HTTP client (requests).

Optional Dependencies
---------------------

For better performance and anti-bot protection, install additional HTTP clients:

.. code-block:: bash

   # Install all optional HTTP clients
   pip install httpx curl-cffi tls-client aiohttp cloudscraper

   # Or install individually as needed
   pip install httpx        # Modern HTTP/2 client
   pip install curl-cffi    # Best for bypassing blocks
   pip install tls-client   # Advanced TLS fingerprinting
   pip install aiohttp      # Async support
   pip install cloudscraper # Cloudflare bypass

Verify Installation
-------------------

Test your installation:

.. code-block:: python

   from gplay_scraper import GPlayScraper

   scraper = GPlayScraper()
   app = scraper.app_analyze('com.whatsapp')
   print(f"Successfully installed! Got: {app['title']}")

Development Installation
------------------------

To install from source:

.. code-block:: bash

   git clone https://github.com/mohammedcha/gplay-scraper.git
   cd gplay-scraper
   pip install -e .

Upgrading
---------

To upgrade to the latest version:

.. code-block:: bash

   pip install --upgrade gplay-scraper

Troubleshooting
---------------

ImportError
^^^^^^^^^^^

If you get an ImportError, ensure the package is installed:

.. code-block:: bash

   pip show gplay-scraper

HTTP Client Issues
^^^^^^^^^^^^^^^^^^

If you encounter HTTP errors, try installing alternative clients:

.. code-block:: bash

   pip install curl-cffi

Then specify the client:

.. code-block:: python

   from gplay_scraper import GPlayScraper
   
   scraper = GPlayScraper(http_client='curl_cffi')

Next Steps
----------

* :doc:`quickstart` - Get started with basic usage
* :doc:`examples` - See practical examples
* :doc:`api/app` - Explore the API reference


================================================
FILE: docs/quickstart.rst
================================================
Quick Start Guide
=================

This guide will get you started with GPlay Scraper in 5 minutes.

Basic Usage
-----------

Initialize the Scraper
^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: python

   from gplay_scraper import GPlayScraper

   # Initialize once
   scraper = GPlayScraper()

Get App Data
^^^^^^^^^^^^

Extract complete app information with 57 fields:

.. code-block:: python

   # Get all app data
   app = scraper.app_analyze('com.whatsapp')
   
   # Access the data
   print(app['title'])              # App name
   print(app['developer'])          # Developer name
   print(app['score'])              # Rating (0-5)
   print(app['realInstalls'])       # Exact install count
   print(app['dailyInstalls'])      # Average daily installs
   print(app['publisherCountry'])   # Publisher country

Get Specific Fields
^^^^^^^^^^^^^^^^^^^

If you only need certain fields:

.. code-block:: python

   # Get single field
   title = scraper.app_get_field('com.whatsapp', 'title')
   
   # Get multiple fields
   fields = scraper.app_get_fields('com.whatsapp', 
       ['title', 'score', 'dailyInstalls'])

Search for Apps
^^^^^^^^^^^^^^^

Search the Play Store by keyword:

.. code-block:: python

   # Search for apps
   results = scraper.search_analyze('messaging', count=10)
   
   # Iterate through results
   for app in results:
       print(f"{app['title']} by {app['developer']}")
       print(f"  Rating: {app['score']}/5")
       print(f"  Free: {app['free']}")

Get Reviews
^^^^^^^^^^^

Extract user reviews with ratings:

.. code-block:: python

   # Get newest reviews
   reviews = scraper.reviews_analyze('com.whatsapp', 
       count=50, 
       sort='NEWEST')
   
   # Process reviews
   for review in reviews:
       print(f"{review['userName']}: {review['score']}/5")
       print(f"  {review['content'][:100]}...")

Get Developer Apps
^^^^^^^^^^^^^^^^^^

Find all apps from a developer:

.. code-block:: python

   # Get all apps from Google
   apps = scraper.developer_analyze('Google LLC')
   
   for app in apps:
       print(f"{app['title']} - {app['score']}/5")

Find Similar Apps
^^^^^^^^^^^^^^^^^

Discover competitor or similar apps:

.. code-block:: python

   # Find apps similar to WhatsApp
   similar = scraper.similar_analyze('com.whatsapp', count=20)
   
   for app in similar:
       print(f"{app['title']} - {app['score']}/5")

Get Top Charts
^^^^^^^^^^^^^^

Access top free, paid, or grossing apps:

.. code-block:: python

   # Top free games
   top_free = scraper.list_analyze('TOP_FREE', 
       category='GAME', 
       count=100)
   
   # Top paid apps
   top_paid = scraper.list_analyze('TOP_PAID', 
       category='APPLICATION', 
       count=50)
   
   # Top grossing
   top_grossing = scraper.list_analyze('TOP_GROSSING', count=100)

Get Search Suggestions
^^^^^^^^^^^^^^^^^^^^^^

Get autocomplete suggestions:

.. code-block:: python

   # Get suggestions
   suggestions = scraper.suggest_analyze('mine', count=10)
   print(suggestions)
   # ['minecraft', 'minesweeper', 'mineplex', ...]

Multi-Language Support
----------------------

Get data in different languages:

.. code-block:: python

   # Spanish
   app = scraper.app_analyze('com.whatsapp', lang='es')
   
   # French
   app = scraper.app_analyze('com.whatsapp', lang='fr')
   
   # Japanese
   app = scraper.app_analyze('com.whatsapp', lang='ja')

Regional Data
-------------

Get region-specific data:

.. code-block:: python

   # UK data
   app = scraper.app_analyze('com.whatsapp', country='gb')
   
   # Germany
   app = scraper.app_analyze('com.whatsapp', country='de')
   
   # Japan
   app = scraper.app_analyze('com.whatsapp', country='jp')
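
A common follow-up is comparing a field across several markets. A self-contained sketch over pre-fetched results (the sample dicts below stand in for per-country ``app_analyze`` calls):

.. code-block:: python

   # Sample per-country results keyed by country code
   by_country = {
       'us': {'score': 4.3},
       'gb': {'score': 4.4},
       'de': {'score': 4.2},
   }

   # Country where the app is rated best
   best_country, best_data = max(by_country.items(), key=lambda kv: kv[1]['score'])
   # best_country == 'gb'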

Image Sizes
-----------

Control image quality:

.. code-block:: python

   # Small images (512px)
   app = scraper.app_analyze('com.whatsapp', assets='SMALL')
   
   # Medium images (1024px) - default
   app = scraper.app_analyze('com.whatsapp', assets='MEDIUM')
   
   # Large images (2048px)
   app = scraper.app_analyze('com.whatsapp', assets='LARGE')
   
   # Original size
   app = scraper.app_analyze('com.whatsapp', assets='ORIGINAL')

Error Handling
--------------

Handle errors gracefully:

.. code-block:: python

   from gplay_scraper import GPlayScraper
   from gplay_scraper.exceptions import (
       AppNotFoundError,
       InvalidAppIdError,
       NetworkError
   )

   scraper = GPlayScraper()

   try:
       app = scraper.app_analyze('invalid.app.id')
   except InvalidAppIdError:
       print("Invalid app ID format")
   except AppNotFoundError:
       print("App not found on Play Store")
   except NetworkError:
       print("Network error occurred")
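
Transient failures such as rate limits or flaky networks are often worth retrying. A minimal retry helper with exponential backoff (a sketch — in practice you would catch ``NetworkError`` or ``RateLimitError`` rather than bare ``Exception``):

.. code-block:: python

   import time

   def with_retries(func, attempts=3, base_delay=1.0):
       """Call func(), retrying with exponential backoff on failure."""
       for attempt in range(attempts):
           try:
               return func()
           except Exception:
               if attempt == attempts - 1:
                   raise  # out of attempts, re-raise the last error
               time.sleep(base_delay * (2 ** attempt))

   # Usage: app = with_retries(lambda: scraper.app_analyze('com.whatsapp'))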

Common Patterns
---------------

Batch Processing
^^^^^^^^^^^^^^^^

.. code-block:: python

   app_ids = ['com.whatsapp', 'org.telegram.messenger', 'org.thoughtcrime.securesms']
   
   for app_id in app_ids:
       app = scraper.app_analyze(app_id)
       print(f"{app['title']}: {app['realInstalls']:,} installs")
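
For larger batches you usually want to keep going when one app fails and collect successes and failures separately. A sketch of that pattern (the ``fetch`` callable stands in for ``scraper.app_analyze``; ``fake_fetch`` below is a hypothetical stub so the snippet runs on its own):

.. code-block:: python

   def analyze_batch(app_ids, fetch):
       """Run fetch over app_ids, splitting successes from failures."""
       results, errors = {}, {}
       for app_id in app_ids:
           try:
               results[app_id] = fetch(app_id)
           except Exception as exc:  # in practice: the gplay_scraper exceptions
               errors[app_id] = str(exc)
       return results, errors

   def fake_fetch(app_id):
       """Stub standing in for scraper.app_analyze."""
       if app_id == 'bad.id':
           raise ValueError('not found')
       return {'title': app_id.split('.')[-1].title()}

   ok, failed = analyze_batch(['com.whatsapp', 'bad.id'], fake_fetch)
   # ok holds the parsed data; failed records the error message for 'bad.id'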

Market Research
^^^^^^^^^^^^^^^

.. code-block:: python

   # Find highly-rated messaging apps
   results = scraper.search_analyze('messaging', count=100)
   high_rated = [app for app in results if app['score'] >= 4.5]
   
   for app in high_rated:
       print(f"{app['title']}: {app['score']}/5")
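
Once results are filtered, ranking them makes the comparison easier to read. A self-contained sketch over sample data (the dicts stand in for entries returned by ``search_analyze``):

.. code-block:: python

   sample_results = [
       {'title': 'A', 'score': 4.6},
       {'title': 'B', 'score': 4.8},
       {'title': 'C', 'score': 4.2},
   ]

   # Keep highly-rated apps and sort best-first
   high_rated = sorted(
       (app for app in sample_results if app['score'] >= 4.5),
       key=lambda app: app['score'],
       reverse=True,
   )
   # B (4.8) first, then A (4.6); C is filtered out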

Competitor Analysis
^^^^^^^^^^^^^^^^^^^

.. code-block:: python

   # Analyze your app vs competitors
   my_app = scraper.app_analyze('com.myapp')
   competitors = scraper.similar_analyze('com.myapp', count=10)
   
   print(f"My App: {my_app['score']}/5")
   print("\nCompetitors:")
   for comp in competitors:
       print(f"  {comp['title']}: {comp['score']}/5")
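
To turn the raw scores into a single comparison number, you can compute the share of competitors your app outscores. A sketch with sample values (they stand in for ``score`` fields from ``similar_analyze``):

.. code-block:: python

   my_score = 4.1
   competitor_scores = [4.5, 3.9, 4.3, 3.7]

   # Fraction of competitors scoring below my app
   beaten = sum(1 for s in competitor_scores if my_score > s)
   share = beaten / len(competitor_scores)
   # share == 0.5 -> my app outscores half the field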

Review Monitoring
^^^^^^^^^^^^^^^^^

.. code-block:: python

   # Monitor negative reviews
   reviews = scraper.reviews_analyze('com.myapp', 
       count=100, 
       sort='NEWEST')
   
   negative = [r for r in reviews if r['score'] <= 2]
   
   for review in negative:
       print(f"{review['userName']}: {review['score']}/5")
       print(f"  {review['content']}")

Next Steps
----------

* :doc:`examples` - See more detailed examples
* :doc:`api/app` - Complete API reference
* :doc:`configuration` - Advanced configuration options
* :doc:`fields` - All available fields reference


================================================
FILE: docs/requirements.txt
================================================
sphinx>=7.0.0
sphinx-book-theme>=1.0.0
sphinx-copybutton>=0.5.0
sphinx-autobuild>=2021.3.14


================================================
FILE: examples/README.md
================================================
# Examples

This folder contains example scripts demonstrating all methods for each of the 7 method types.

## Files

### 1. app_methods_example.py
Demonstrates all 6 app methods:
- `app_analyze()` - Get all data as dictionary
- `app_get_field()` - Get single field value
- `app_get_fields()` - Get multiple fields
- `app_print_field()` - Print single field to console
- `app_print_fields()` - Print multiple fields to console
- `app_print_all()` - Print all data as JSON

### 2. search_methods_example.py
Demonstrates all 6 search methods:
- `search_analyze()` - Get all search results
- `search_get_field()` - Get single field from results
- `search_get_fields()` - Get multiple fields from results
- `search_print_field()` - Print single field from results
- `search_print_fields()` - Print multiple fields from results
- `search_print_all()` - Print all results as JSON

### 3. reviews_methods_example.py
Demonstrates all 6 reviews methods:
- `reviews_analyze()` - Get all reviews
- `reviews_get_field()` - Get single field from reviews
- `reviews_get_fields()` - Get multiple fields from reviews
- `reviews_print_field()` - Print single field from reviews
- `reviews_print_fields()` - Print multiple fields from reviews
- `reviews_print_all()` - Print all reviews as JSON

### 4. developer_methods_example.py
Demonstrates all 6 developer methods:
- `developer_analyze()` - Get all developer apps
- `developer_get_field()` - Get single field from apps
- `developer_get_fields()` - Get multiple fields from apps
- `developer_print_field()` - Print single field from apps
- `developer_print_fields()` - Print multiple fields from apps
- `developer_print_all()` - Print all apps as JSON

### 5. list_methods_example.py
Demonstrates all 6 list methods:
- `list_analyze()` - Get all top chart apps
- `list_get_field()` - Get single field from apps
- `list_get_fields()` - Get multiple fields from apps
- `list_print_field()` - Print single field from apps
- `list_print_fields()` - Print multiple fields from apps
- `list_print_all()` - Print all apps as JSON

### 6. similar_methods_example.py
Demonstrates all 6 similar methods:
- `similar_analyze()` - Get all similar apps
- `similar_get_field()` - Get single field from apps
- `similar_get_fields()` - Get multiple fields from apps
- `similar_print_field()` - Print single field from apps
- `similar_print_fields()` - Print multiple fields from apps
- `similar_print_all()` - Print all apps as JSON

### 7. suggest_methods_example.py
Demonstrates all 4 suggest methods:
- `suggest_analyze()` - Get search suggestions
- `suggest_nested()` - Get nested suggestions
- `suggest_print_all()` - Print suggestions as JSON
- `suggest_print_nested()` - Print nested suggestions as JSON

## Running Examples

```bash
# Run any example
python examples/app_methods_example.py
python examples/search_methods_example.py
python examples/reviews_methods_example.py
python examples/developer_methods_example.py
python examples/list_methods_example.py
python examples/similar_methods_example.py
python examples/suggest_methods_example.py
```

## Note

These examples are simple demonstrations. For more advanced use cases, check the documentation in the `README/` folder.


================================================
FILE: examples/app_methods_example.py
================================================
"""
App Methods Example
Demonstrates all 6 app methods for extracting app details

Parameters:
- app_id: App package name
- lang: Language code (default: 'en')
- country: Country code (default: 'us')
"""

from gplay_scraper import GPlayScraper

scraper = GPlayScraper()
app_id = "com.whatsapp"
lang = "en"
country = "us"

print("=== App Methods Example ===\n")

# 1. app_analyze() - Get all data as dictionary
print("1. app_analyze(app_id, lang='en', country='us')")
data = scraper.app_analyze(app_id, lang=lang, country=country)
print(f"   Retrieved {len(data)} fields")
print(f"   Title: {data['title']}")
print(f"   Score: {data['score']}")

# 2. app_get_field() - Get single field
print("\n2. app_get_field(app_id, field, lang='en', country='us')")
title = scraper.app_get_field(app_id, "title", lang=lang, country=country)
print(f"   Title: {title}")

# 3. app_get_fields() - Get multiple fields
print("\n3. app_get_fields(app_id, fields, lang='en', country='us')")
fields = scraper.app_get_fields(app_id, ["title", "score", "installs"], lang=lang, country=country)
print(f"   {fields}")

# 4. app_print_field() - Print single field
print("\n4. app_print_field(app_id, field, lang='en', country='us')")
scraper.app_print_field(app_id, "developer", lang=lang, country=country)

# 5. app_print_fields() - Print multiple fields
print("\n5. app_print_fields(app_id, fields, lang='en', country='us')")
scraper.app_print_fields(app_id, ["title", "score", "free"], lang=lang, country=country)

# 6. app_print_all() - Print all data as JSON
print("\n6. app_print_all(app_id, lang='en', country='us')")
scraper.app_print_all(app_id, lang=lang, country=country)


================================================
FILE: examples/developer_methods_example.py
================================================
"""
Developer Methods Example
Demonstrates all 6 developer methods for getting developer's apps

Parameters:
- dev_id: Developer ID (numeric or string)
- count: Number of apps (default: 100)
- lang: Language code (default: 'en')
- country: Country code (default: 'us')
"""

from gplay_scraper import GPlayScraper

scraper = GPlayScraper()
dev_id = "5700313618786177705"  # Google LLC
count = 20
lang = "en"
country = "us"

print("=== Developer Methods Example ===\n")

# 1. developer_analyze() - Get all developer apps
print("1. developer_analyze(dev_id, count=100, lang='en', country='us')")
apps = scraper.developer_analyze(dev_id, count=count, lang=lang, country=country)
print(f"   Found {len(apps)} apps")
print(f"   First app: {apps[0]['title']}")

# 2. developer_get_field() - Get single field from all apps
print("\n2. developer_get_field(dev_id, field, count=100, lang='en', country='us')")
titles = scraper.developer_get_field(dev_id, "title", count=count, lang=lang, country=country)
print(f"   Titles: {titles[:3]}")

# 3. developer_get_fields() - Get multiple fields from all apps
print("\n3. developer_get_fields(dev_id, fields, count=100, lang='en', country='us')")
apps_data = scraper.developer_get_fields(dev_id, ["title", "score"], count=10, lang=lang, country=country)
print(f"   First 2 apps: {apps_data[:2]}")

# 4. developer_print_field() - Print single field from all apps
print("\n4. developer_print_field(dev_id, field, count=100, lang='en', country='us')")
scraper.developer_print_field(dev_id, "title", count=5, lang=lang, country=country)

# 5. developer_print_fields() - Print multiple fields from all apps
print("\n5. developer_print_fields(dev_id, fields, count=100, lang='en', country='us')")
scraper.developer_print_fields(dev_id, ["title", "score"], count=5, lang=lang, country=country)

# 6. developer_print_all() - Print all developer apps as JSON
print("\n6. developer_print_all(dev_id, count=100, lang='en', country='us')")
scraper.developer_print_all(dev_id, count=5, lang=lang, country=country)
gitextract_ihudd3jc/

├── .github/
│   ├── ISSUE_TEMPLATE/
│   │   ├── bug_report.md
│   │   └── feature_request.md
│   ├── pull_request_template.md
│   └── workflows/
│       ├── docs.yml
│       └── test.yml
├── .gitignore
├── CHANGELOG.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── LICENSE
├── MANIFEST.in
├── README/
│   ├── APP_METHODS.md
│   ├── DEVELOPER_METHODS.md
│   ├── LIST_METHODS.md
│   ├── README.md
│   ├── REVIEWS_METHODS.md
│   ├── SEARCH_METHODS.md
│   ├── SIMILAR_METHODS.md
│   └── SUGGEST_METHODS.md
├── README.md
├── SECURITY.md
├── build_docs.py
├── docs/
│   ├── README.md
│   ├── api/
│   │   ├── app.rst
│   │   ├── developer.rst
│   │   ├── list.rst
│   │   ├── reviews.rst
│   │   ├── search.rst
│   │   ├── similar.rst
│   │   └── suggest.rst
│   ├── conf.py
│   ├── configuration.rst
│   ├── error_handling.rst
│   ├── examples.rst
│   ├── fields.rst
│   ├── index.rst
│   ├── installation.rst
│   ├── quickstart.rst
│   └── requirements.txt
├── examples/
│   ├── README.md
│   ├── app_methods_example.py
│   ├── developer_methods_example.py
│   ├── list_methods_example.py
│   ├── reviews_methods_example.py
│   ├── search_methods_example.py
│   ├── similar_methods_example.py
│   └── suggest_methods_example.py
├── gplay_scraper/
│   ├── __init__.py
│   ├── app.py
│   ├── config.py
│   ├── core/
│   │   ├── __init__.py
│   │   ├── gplay_methods.py
│   │   ├── gplay_parser.py
│   │   └── gplay_scraper.py
│   ├── exceptions.py
│   ├── models/
│   │   ├── __init__.py
│   │   └── element_specs.py
│   └── utils/
│       ├── __init__.py
│       ├── constants.py
│       ├── error_handling.py
│       ├── helpers.py
│       └── http_client.py
├── output/
│   ├── app_example.json
│   ├── developer_example.json
│   ├── list_example.json
│   ├── reviews_example.json
│   ├── search_example.json
│   ├── similar_example.json
│   └── suggest_example.json
├── requirements.txt
├── setup.py
└── tests/
    ├── __init__.py
    ├── test_app_methods.py
    ├── test_basic.py
    ├── test_developer_methods.py
    ├── test_list_methods.py
    ├── test_package.py
    ├── test_reviews_methods.py
    ├── test_search_methods.py
    ├── test_similar_methods.py
    └── test_suggest_methods.py
SYMBOL INDEX (276 symbols across 20 files)

FILE: build_docs.py
  function install_docs_requirements (line 9) | def install_docs_requirements():
  function build_html_docs (line 23) | def build_html_docs():
  function main (line 50) | def main():

FILE: gplay_scraper/app.py
  class GPlayScraper (line 12) | class GPlayScraper:
    method __init__ (line 28) | def __init__(self, http_client: str = None):
    method app_analyze (line 45) | def app_analyze(self, app_id: str, lang: str = Config.DEFAULT_LANGUAGE...
    method app_get_field (line 59) | def app_get_field(self, app_id: str, field: str, lang: str = Config.DE...
    method app_get_fields (line 74) | def app_get_fields(self, app_id: str, fields: List[str], lang: str = C...
    method app_print_field (line 89) | def app_print_field(self, app_id: str, field: str, lang: str = Config....
    method app_print_fields (line 101) | def app_print_fields(self, app_id: str, fields: List[str], lang: str =...
    method app_print_all (line 113) | def app_print_all(self, app_id: str, lang: str = Config.DEFAULT_LANGUA...
    method search_analyze (line 126) | def search_analyze(self, query: str, count: int = Config.DEFAULT_SEARC...
    method search_get_field (line 140) | def search_get_field(self, query: str, field: str, count: int = Config...
    method search_get_fields (line 155) | def search_get_fields(self, query: str, fields: List[str], count: int ...
    method search_print_field (line 170) | def search_print_field(self, query: str, field: str, count: int = Conf...
    method search_print_fields (line 182) | def search_print_fields(self, query: str, fields: List[str], count: in...
    method search_print_all (line 194) | def search_print_all(self, query: str, count: int = Config.DEFAULT_SEA...
    method reviews_analyze (line 207) | def reviews_analyze(self, app_id: str, count: int = Config.DEFAULT_REV...
    method reviews_get_field (line 223) | def reviews_get_field(self, app_id: str, field: str, count: int = Conf...
    method reviews_get_fields (line 240) | def reviews_get_fields(self, app_id: str, fields: List[str], count: in...
    method reviews_print_field (line 257) | def reviews_print_field(self, app_id: str, field: str, count: int = Co...
    method reviews_print_fields (line 271) | def reviews_print_fields(self, app_id: str, fields: List[str], count: ...
    method reviews_print_all (line 285) | def reviews_print_all(self, app_id: str, count: int = Config.DEFAULT_R...
    method developer_analyze (line 300) | def developer_analyze(self, dev_id: str, count: int = Config.DEFAULT_D...
    method developer_get_field (line 314) | def developer_get_field(self, dev_id: str, field: str, count: int = Co...
    method developer_get_fields (line 329) | def developer_get_fields(self, dev_id: str, fields: List[str], count: ...
    method developer_print_field (line 344) | def developer_print_field(self, dev_id: str, field: str, count: int = ...
    method developer_print_fields (line 356) | def developer_print_fields(self, dev_id: str, fields: List[str], count...
    method developer_print_all (line 368) | def developer_print_all(self, dev_id: str, count: int = Config.DEFAULT...
    method similar_analyze (line 381) | def similar_analyze(self, app_id: str, count: int = Config.DEFAULT_SIM...
    method similar_get_field (line 395) | def similar_get_field(self, app_id: str, field: str, count: int = Conf...
    method similar_get_fields (line 410) | def similar_get_fields(self, app_id: str, fields: List[str], count: in...
    method similar_print_field (line 425) | def similar_print_field(self, app_id: str, field: str, count: int = Co...
    method similar_print_fields (line 437) | def similar_print_fields(self, app_id: str, fields: List[str], count: ...
    method similar_print_all (line 449) | def similar_print_all(self, app_id: str, count: int = Config.DEFAULT_S...
    method list_analyze (line 462) | def list_analyze(self, collection: str = Config.DEFAULT_LIST_COLLECTIO...
    method list_get_field (line 477) | def list_get_field(self, collection: str, field: str, category: str = ...
    method list_get_fields (line 493) | def list_get_fields(self, collection: str, fields: List[str], category...
    method list_print_field (line 509) | def list_print_field(self, collection: str, field: str, category: str ...
    method list_print_fields (line 522) | def list_print_fields(self, collection: str, fields: List[str], catego...
    method list_print_all (line 535) | def list_print_all(self, collection: str = Config.DEFAULT_LIST_COLLECT...
    method suggest_analyze (line 549) | def suggest_analyze(self, term: str, count: int = Config.DEFAULT_SUGGE...
    method suggest_nested (line 563) | def suggest_nested(self, term: str, count: int = Config.DEFAULT_SUGGES...
    method suggest_print_all (line 577) | def suggest_print_all(self, term: str, count: int = Config.DEFAULT_SUG...
    method suggest_print_nested (line 588) | def suggest_print_nested(self, term: str, count: int = Config.DEFAULT_...

FILE: gplay_scraper/config.py
  class Config (line 10) | class Config:
    method get_headers (line 97) | def get_headers(cls, user_agent: str = None) -> Dict[str, str]:
    method get_image_size (line 111) | def get_image_size(cls, size: str = None) -> str:

FILE: gplay_scraper/core/gplay_methods.py
  class AppMethods (line 27) | class AppMethods:
    method __init__ (line 29) | def __init__(self, http_client: str = None):
    method app_analyze (line 39) | def app_analyze(self, app_id: str, lang: str = Config.DEFAULT_LANGUAGE...
    method app_get_field (line 62) | def app_get_field(self, app_id: str, field: str, lang: str = Config.DE...
    method app_get_fields (line 78) | def app_get_fields(self, app_id: str, fields: List[str], lang: str = C...
    method app_print_field (line 95) | def app_print_field(self, app_id: str, field: str, lang: str = Config....
    method app_print_fields (line 112) | def app_print_fields(self, app_id: str, fields: List[str], lang: str =...
    method app_print_all (line 130) | def app_print_all(self, app_id: str, lang: str = Config.DEFAULT_LANGUA...
  class SearchMethods (line 146) | class SearchMethods:
    method __init__ (line 148) | def __init__(self, http_client: str = None):
    method search_analyze (line 158) | def search_analyze(self, query: str, count: int = Config.DEFAULT_SEARC...
    method search_get_field (line 181) | def search_get_field(self, query: str, field: str, count: int = Config...
    method search_get_fields (line 198) | def search_get_fields(self, query: str, fields: List[str], count: int ...
    method search_print_field (line 215) | def search_print_field(self, query: str, field: str, count: int = Conf...
    method search_print_fields (line 233) | def search_print_fields(self, query: str, fields: List[str], count: in...
    method search_print_all (line 253) | def search_print_all(self, query: str, count: int = Config.DEFAULT_SEA...
  class ReviewsMethods (line 270) | class ReviewsMethods:
    method __init__ (line 272) | def __init__(self, http_client: str = None):
    method reviews_analyze (line 282) | def reviews_analyze(self, app_id: str, count: int = Config.DEFAULT_REV...
    method reviews_get_field (line 315) | def reviews_get_field(self, app_id: str, field: str, count: int = Conf...
    method reviews_get_fields (line 334) | def reviews_get_fields(self, app_id: str, fields: List[str], count: in...
    method reviews_print_field (line 353) | def reviews_print_field(self, app_id: str, field: str, count: int = Co...
    method reviews_print_fields (line 373) | def reviews_print_fields(self, app_id: str, fields: List[str], count: ...
    method reviews_print_all (line 394) | def reviews_print_all(self, app_id: str, count: int = Config.DEFAULT_R...
  class DeveloperMethods (line 412) | class DeveloperMethods:
    method __init__ (line 414) | def __init__(self, http_client: str = None):
    method developer_analyze (line 424) | def developer_analyze(self, dev_id: str, count: int = Config.DEFAULT_D...
    method developer_get_field (line 447) | def developer_get_field(self, dev_id: str, field: str, count: int = Co...
    method developer_get_fields (line 464) | def developer_get_fields(self, dev_id: str, fields: List[str], count: ...
    method developer_print_field (line 481) | def developer_print_field(self, dev_id: str, field: str, count: int = ...
    method developer_print_fields (line 499) | def developer_print_fields(self, dev_id: str, fields: List[str], count...
    method developer_print_all (line 519) | def developer_print_all(self, dev_id: str, count: int = Config.DEFAULT...
  class SimilarMethods (line 535) | class SimilarMethods:
    method __init__ (line 537) | def __init__(self, http_client: str = None):
    method similar_analyze (line 547) | def similar_analyze(self, app_id: str, count: int = Config.DEFAULT_SIM...
    method similar_get_field (line 570) | def similar_get_field(self, app_id: str, field: str, count: int = Conf...
    method similar_get_fields (line 587) | def similar_get_fields(self, app_id: str, fields: List[str], count: in...
    method similar_print_field (line 604) | def similar_print_field(self, app_id: str, field: str, count: int = Co...
    method similar_print_fields (line 622) | def similar_print_fields(self, app_id: str, fields: List[str], count: ...
    method similar_print_all (line 642) | def similar_print_all(self, app_id: str, count: int = Config.DEFAULT_S...
  class ListMethods (line 658) | class ListMethods:
    method __init__ (line 660) | def __init__(self, http_client: str = None):
    method list_analyze (line 670) | def list_analyze(self, collection: str = Config.DEFAULT_LIST_COLLECTIO...
    method list_get_field (line 688) | def list_get_field(self, collection: str, field: str, category: str = ...
    method list_get_fields (line 706) | def list_get_fields(self, collection: str, fields: List[str], category...
    method list_print_field (line 724) | def list_print_field(self, collection: str, field: str, category: str ...
    method list_print_fields (line 743) | def list_print_fields(self, collection: str, fields: List[str], catego...
    method list_print_all (line 764) | def list_print_all(self, collection: str = Config.DEFAULT_LIST_COLLECT...
  class SuggestMethods (line 781) | class SuggestMethods:
    method __init__ (line 783) | def __init__(self, http_client: str = None):
    method suggest_analyze (line 793) | def suggest_analyze(self, term: str, count: int = Config.DEFAULT_SUGGE...
    method suggest_nested (line 816) | def suggest_nested(self, term: str, count: int = Config.DEFAULT_SUGGES...
    method suggest_print_all (line 842) | def suggest_print_all(self, term: str, count: int = Config.DEFAULT_SUG...
    method suggest_print_nested (line 858) | def suggest_print_nested(self, term: str, count: int = Config.DEFAULT_...

FILE: gplay_scraper/core/gplay_parser.py
  class AppParser (line 18) | class AppParser:
    method parse_app_data (line 21) | def parse_app_data(self, dataset: Dict, app_id: str, scraper=None, ass...
    method format_app_data (line 142) | def format_app_data(self, details: dict) -> dict:
  class SearchParser (line 212) | class SearchParser:
    method parse_search_results (line 216) | def parse_search_results(self, dataset: Dict, count: int) -> List[Dict]:
    method extract_search_result (line 244) | def extract_search_result(self, data) -> Dict:
    method format_search_result (line 262) | def format_search_result(self, result: dict) -> dict:
    method extract_pagination_token (line 286) | def extract_pagination_token(self, dataset: Dict) -> str:
    method parse_html_content (line 308) | def parse_html_content(self, html_content: str) -> Dict:
  class ReviewsParser (line 345) | class ReviewsParser:
    method parse_reviews_response (line 349) | def parse_reviews_response(self, content: str) -> Tuple[List[Dict], Op...
    method extract_review_data (line 406) | def extract_review_data(self, review_raw) -> Optional[Dict]:
    method parse_multiple_responses (line 436) | def parse_multiple_responses(self, dataset: Dict) -> List[Dict]:
    method format_reviews_data (line 466) | def format_reviews_data(self, reviews_data: List[Dict]) -> List[Dict]:
  class DeveloperParser (line 493) | class DeveloperParser:
    method parse_developer_data (line 497) | def parse_developer_data(self, dataset: Dict, dev_id: str) -> List[Dict]:
    method format_developer_data (line 547) | def format_developer_data(self, apps_data: List[Dict]) -> List[Dict]:
  class SimilarParser (line 577) | class SimilarParser:
    method parse_similar_data (line 581) | def parse_similar_data(self, dataset: Dict) -> List[Dict]:
    method format_similar_data (line 620) | def format_similar_data(self, apps_data: List[Dict]) -> List[Dict]:
  class ListParser (line 650) | class ListParser:
    method parse_list_data (line 654) | def parse_list_data(self, dataset: Dict, count: int) -> List[Dict]:
    method format_list_data (line 684) | def format_list_data(self, apps_data: List[Dict]) -> List[Dict]:
  class SuggestParser (line 717) | class SuggestParser:
    method parse_suggestions (line 721) | def parse_suggestions(self, dataset: Dict) -> List[str]:
    method format_suggestions (line 733) | def format_suggestions(self, suggestions: List[str]) -> List[str]:

FILE: gplay_scraper/core/gplay_scraper.py
  class AppScraper (line 16) | class AppScraper:
    method __init__ (line 30) | def __init__(self, rate_limit_delay: float = None, http_client: str = ...
    method fetch_playstore_page (line 39) | def fetch_playstore_page(self, app_id: str, lang: str = Config.DEFAULT...
    method fetch_fallback_data (line 52) | def fetch_fallback_data(self, app_id: str, gl: str = None, no_locale: ...
    method scrape_play_store_data (line 86) | def scrape_play_store_data(self, app_id: str, lang: str = Config.DEFAU...
  class SearchScraper (line 120) | class SearchScraper:
    method __init__ (line 133) | def __init__(self, rate_limit_delay: float = None, http_client: str = ...
    method fetch_playstore_search (line 143) | def fetch_playstore_search(self, query: str, count: int, lang: str = C...
    method scrape_play_store_data (line 169) | def scrape_play_store_data(self, query: str, count: int = Config.DEFAU...
    method _get_nested_value (line 216) | def _get_nested_value(self, data, path, default=None):
  class ReviewsScraper (line 235) | class ReviewsScraper:
    method __init__ (line 248) | def __init__(self, rate_limit_delay: float = None, http_client: str = ...
    method fetch_reviews_batch (line 257) | def fetch_reviews_batch(self, app_id: str, lang: str = Config.DEFAULT_...
    method scrape_reviews_data (line 278) | def scrape_reviews_data(self, app_id: str, count: int = Config.DEFAULT...
  class DeveloperScraper (line 337) | class DeveloperScraper:
    method __init__ (line 350) | def __init__(self, rate_limit_delay: float = None, http_client: str = ...
    method fetch_developer_page (line 359) | def fetch_developer_page(self, dev_id: str, lang: str = Config.DEFAULT...
    method scrape_play_store_data (line 375) | def scrape_play_store_data(self, dev_id: str, lang: str = Config.DEFAU...
  class SimilarScraper (line 408) | class SimilarScraper:
    method __init__ (line 421) | def __init__(self, rate_limit_delay: float = None, http_client: str = ...
    method fetch_similar_page (line 430) | def fetch_similar_page(self, app_id: str, lang: str = Config.DEFAULT_L...
    method scrape_play_store_data (line 446) | def scrape_play_store_data(self, app_id: str, lang: str = Config.DEFAU...
  class ListScraper (line 491) | class ListScraper:
    method __init__ (line 504) | def __init__(self, rate_limit_delay: float = None, http_client: str = ...
    method scrape_play_store_data (line 515) | def scrape_play_store_data(self, collection: str, category: str = Conf...
  class SuggestScraper (line 543) | class SuggestScraper:
    method __init__ (line 556) | def __init__(self, rate_limit_delay: float = None, http_client: str = ...
    method scrape_suggestions (line 568) | def scrape_suggestions(self, term: str, lang: str = Config.DEFAULT_LAN...

FILE: gplay_scraper/exceptions.py
  class GPlayScraperError (line 7) | class GPlayScraperError(Exception):
  class InvalidAppIdError (line 12) | class InvalidAppIdError(GPlayScraperError):
  class AppNotFoundError (line 17) | class AppNotFoundError(GPlayScraperError):
  class RateLimitError (line 22) | class RateLimitError(GPlayScraperError):
  class NetworkError (line 27) | class NetworkError(GPlayScraperError):
  class DataParsingError (line 32) | class DataParsingError(GPlayScraperError):

FILE: gplay_scraper/models/element_specs.py
  function parse_permissions (line 14) | def parse_permissions(perms_data: Any) -> Dict[str, List[str]]:
  function nested_lookup (line 108) | def nested_lookup(obj: Any, key_list: List) -> Any:
  function format_image_url (line 139) | def format_image_url(url: str, size: str = None) -> str:
  class ElementSpec (line 166) | class ElementSpec:
    method __init__ (line 197) | def __init__(
    method extract_content (line 212) | def extract_content(self, source: dict, assets: str = None) -> Any:
  class ElementSpecs (line 260) | class ElementSpecs:

FILE: gplay_scraper/utils/error_handling.py
  function retry_on_not_found (line 12) | def retry_on_not_found(max_retries=Config.DEFAULT_RETRY_COUNT, delay=1.0):
  function handle_network_errors (line 53) | def handle_network_errors(return_empty=False):
  function handle_parsing_errors (line 85) | def handle_parsing_errors(return_empty=False):
  function handle_rate_limit (line 117) | def handle_rate_limit():
  function validate_inputs (line 159) | def validate_inputs():
  function safe_print (line 199) | def safe_print():
  function comprehensive_error_handler (line 234) | def comprehensive_error_handler(return_empty=False):

FILE: gplay_scraper/utils/helpers.py
  function unescape_text (line 20) | def unescape_text(s: Optional[str]) -> Optional[str]:
  function clean_json_string (line 44) | def clean_json_string(json_str: str) -> str:
  function alternative_json_clean (line 76) | def alternative_json_clean(json_str: str) -> str:
  function parse_release_date (line 137) | def parse_release_date(release_date_str: Optional[str]) -> Optional[date...
  function calculate_app_age (line 165) | def calculate_app_age(release_date_str: Optional[str], current_date: dat...
  function parse_installs_string (line 198) | def parse_installs_string(installs_str: str) -> Optional[int]:
  function calculate_daily_installs (line 229) | def calculate_daily_installs(install_count, release_date_str: Optional[s...
  function calculate_monthly_installs (line 272) | def calculate_monthly_installs(install_count, release_date_str: Optional...
  function tamp_to_date (line 319) | def tamp_to_date(value) -> str:
  function get_publisher_country (line 348) | def get_publisher_country(phone: Optional[str], address: Optional[str]) ...
  function add_count (line 392) | def add_count(address):
  function pho_count (line 435) | def pho_count(phone):

FILE: gplay_scraper/utils/http_client.py
  class HttpClient (line 25) | class HttpClient:
    method __init__ (line 47) | def __init__(self, rate_limit_delay: float = None, client_type: str = ...
    method _setup_client (line 65) | def _setup_client(self):
    method _try_requests (line 100) | def _try_requests(self):
    method _try_curl_cffi (line 116) | def _try_curl_cffi(self):
    method _try_tls_client (line 138) | def _try_tls_client(self):
    method _try_urllib3 (line 162) | def _try_urllib3(self):
    method _try_cloudscraper (line 184) | def _try_cloudscraper(self):
    method _try_aiohttp (line 206) | def _try_aiohttp(self):
    method _try_httpx (line 230) | def _try_httpx(self):
    method fetch_app_page (line 254) | def fetch_app_page(self, app_id: str, lang: str = Config.DEFAULT_LANGU...
    method fetch_app_page_no_locale (line 296) | def fetch_app_page_no_locale(self, app_id: str) -> str:
    method fetch_search_page (line 316) | def fetch_search_page(self, query: str = None, token: str = None, need...
    method fetch_reviews_batch (line 379) | def fetch_reviews_batch(self, app_id: str, lang: str = Config.DEFAULT_...
    method fetch_developer_page (line 421) | def fetch_developer_page(self, dev_id: str, lang: str = Config.DEFAULT...
    method fetch_cluster_page (line 462) | def fetch_cluster_page(self, cluster_url: str, lang: str = Config.DEFA...
    method fetch_list_page (line 490) | def fetch_list_page(self, collection: str, category: str = Config.DEFA...
    method fetch_suggest_page (line 527) | def fetch_suggest_page(self, term: str, lang: str = Config.DEFAULT_LAN...
    method _make_request (line 563) | def _make_request(self, method: str, url: str, **kwargs):
    method _try_request_with_client (line 608) | def _try_request_with_client(self, client_type: str, method: str, url:...
    method _async_request (line 727) | async def _async_request(self, method: str, url: str, **kwargs):
    method _is_404_error (line 763) | def _is_404_error(self, error: Exception) -> bool:
    method _try_next_client (line 783) | def _try_next_client(self):
    method rate_limit (line 807) | def rate_limit(self):
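The `_try_requests` / `_try_curl_cffi` / `_try_next_client` methods point at a fallback chain: the client attempts a request with one HTTP library and, on failure, moves to the next. A toy sketch of that pattern with stubbed clients — the names and failure behavior here are illustrative assumptions:

```python
from typing import Callable, List

def fetch_with_fallback(url: str, clients: List[Callable[[str], str]]) -> str:
    """Try each client callable in order; return the first successful response."""
    errors = []
    for client in clients:
        try:
            return client(url)
        except Exception as exc:  # a real client would narrow this to network errors
            errors.append(exc)
    raise RuntimeError(f"all {len(clients)} clients failed: {errors}")

# Stub clients: the first two fail, the third succeeds.
def broken_client(url):  raise ConnectionError("blocked")
def flaky_client(url):   raise TimeoutError("timed out")
def working_client(url): return f"<html>fetched {url}</html>"

body = fetch_with_fallback("https://play.google.com/store/apps",
                           [broken_client, flaky_client, working_client])
print(body)
```

The real `HttpClient` wraps seven libraries (requests, curl-cffi, tls-client, urllib3, cloudscraper, aiohttp, httpx) behind this kind of chain, plus rate limiting and 404 detection that the sketch omits.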

FILE: tests/test_app_methods.py
  class TestAppMethods (line 8) | class TestAppMethods(unittest.TestCase):
    method setUpClass (line 11) | def setUpClass(cls):
    method test_app_analyze (line 17) | def test_app_analyze(self):
    method test_app_get_field (line 35) | def test_app_get_field(self):
    method test_app_get_fields (line 48) | def test_app_get_fields(self):
    method test_app_print_field (line 63) | def test_app_print_field(self):
    method test_app_print_fields (line 75) | def test_app_print_fields(self):
    method test_app_print_all (line 87) | def test_app_print_all(self):

FILE: tests/test_basic.py
  class TestBasicFunctionality (line 11) | class TestBasicFunctionality(unittest.TestCase):
    method test_import_gplay_scraper (line 14) | def test_import_gplay_scraper(self):
    method test_scraper_initialization (line 19) | def test_scraper_initialization(self):
    method test_scraper_initialization_with_http_client (line 24) | def test_scraper_initialization_with_http_client(self):
    method test_scraper_has_required_methods (line 38) | def test_scraper_has_required_methods(self):

FILE: tests/test_developer_methods.py
  class TestDeveloperMethods (line 12) | class TestDeveloperMethods(unittest.TestCase):
    method setUp (line 15) | def setUp(self):
    method test_developer_analyze (line 23) | def test_developer_analyze(self):
    method test_developer_get_field (line 38) | def test_developer_get_field(self):
    method test_developer_get_fields (line 52) | def test_developer_get_fields(self):
    method test_developer_print_field (line 66) | def test_developer_print_field(self):
    method test_developer_print_fields (line 78) | def test_developer_print_fields(self):
    method test_developer_print_all (line 90) | def test_developer_print_all(self):

FILE: tests/test_list_methods.py
  class TestListMethods (line 12) | class TestListMethods(unittest.TestCase):
    method setUp (line 15) | def setUp(self):
    method test_list_analyze (line 24) | def test_list_analyze(self):
    method test_list_get_field (line 39) | def test_list_get_field(self):
    method test_list_get_fields (line 53) | def test_list_get_fields(self):
    method test_list_print_field (line 67) | def test_list_print_field(self):
    method test_list_print_fields (line 79) | def test_list_print_fields(self):
    method test_list_print_all (line 91) | def test_list_print_all(self):

FILE: tests/test_package.py
  class TestPackageFunctionality (line 17) | class TestPackageFunctionality(unittest.TestCase):
    method test_import_gplay_scraper (line 20) | def test_import_gplay_scraper(self):
    method test_scraper_initialization (line 25) | def test_scraper_initialization(self):
    method test_http_clients (line 30) | def test_http_clients(self):
    method test_all_methods_exist (line 46) | def test_all_methods_exist(self):

FILE: tests/test_reviews_methods.py
  class TestReviewsMethods (line 12) | class TestReviewsMethods(unittest.TestCase):
    method setUp (line 15) | def setUp(self):
    method test_reviews_analyze (line 24) | def test_reviews_analyze(self):
    method test_reviews_get_field (line 40) | def test_reviews_get_field(self):
    method test_reviews_get_fields (line 54) | def test_reviews_get_fields(self):
    method test_reviews_print_field (line 68) | def test_reviews_print_field(self):
    method test_reviews_print_fields (line 80) | def test_reviews_print_fields(self):
    method test_reviews_print_all (line 92) | def test_reviews_print_all(self):

FILE: tests/test_search_methods.py
  class TestSearchMethods (line 12) | class TestSearchMethods(unittest.TestCase):
    method setUpClass (line 15) | def setUpClass(cls):
    method test_search_analyze (line 22) | def test_search_analyze(self):
    method test_search_get_field (line 38) | def test_search_get_field(self):
    method test_search_get_fields (line 53) | def test_search_get_fields(self):
    method test_search_print_field (line 71) | def test_search_print_field(self):
    method test_search_print_fields (line 83) | def test_search_print_fields(self):
    method test_search_print_all (line 95) | def test_search_print_all(self):

FILE: tests/test_similar_methods.py
  class TestSimilarMethods (line 12) | class TestSimilarMethods(unittest.TestCase):
    method setUp (line 15) | def setUp(self):
    method test_similar_analyze (line 23) | def test_similar_analyze(self):
    method test_similar_get_field (line 38) | def test_similar_get_field(self):
    method test_similar_get_fields (line 52) | def test_similar_get_fields(self):
    method test_similar_print_field (line 66) | def test_similar_print_field(self):
    method test_similar_print_fields (line 78) | def test_similar_print_fields(self):
    method test_similar_print_all (line 90) | def test_similar_print_all(self):

FILE: tests/test_suggest_methods.py
  class TestSuggestMethods (line 12) | class TestSuggestMethods(unittest.TestCase):
    method setUp (line 15) | def setUp(self):
    method test_suggest_analyze (line 23) | def test_suggest_analyze(self):
    method test_suggest_nested (line 38) | def test_suggest_nested(self):
    method test_suggest_print_nested (line 52) | def test_suggest_print_nested(self):
    method test_suggest_print_all (line 64) | def test_suggest_print_all(self):
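The test suites above exercise the same six-method convention for every scraping type (`*_analyze`, `*_get_field`, `*_get_fields`, `*_print_field`, `*_print_fields`, `*_print_all`). A toy sketch of how `get_field` and `get_fields` typically delegate to `analyze` — the class and data below are fabricated for illustration, not the library's implementation:

```python
class ToyAppMethods:
    """Illustrates the analyze/get_field/get_fields convention, not the real class."""

    def analyze(self, app_id: str) -> dict:
        # A real implementation would fetch and parse the Play Store page here.
        return {"appId": app_id, "title": "LIMBO", "score": 4.3}

    def get_field(self, app_id: str, field: str):
        """Single-field lookup over the full analyze() result."""
        return self.analyze(app_id).get(field)

    def get_fields(self, app_id: str, fields: list) -> dict:
        """Subset of the analyze() result, keyed by the requested field names."""
        data = self.analyze(app_id)
        return {f: data.get(f) for f in fields}

toy = ToyAppMethods()
print(toy.get_field("com.playdead.limbo.full", "title"))  # LIMBO
print(toy.get_fields("com.playdead.limbo.full", ["title", "score"]))
```

The `print_*` variants in the real library presumably format the same lookups for console output, which is why each test class mirrors the same six names.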
Condensed preview — 81 files, each showing path, character count, and a content snippet; the full structured content is about 610K characters.
[
  {
    "path": ".github/ISSUE_TEMPLATE/bug_report.md",
    "chars": 740,
    "preview": "---\nname: Bug report\nabout: Create a report to help us improve\ntitle: '[BUG] '\nlabels: bug\nassignees: ''\n---\n\n**Describe"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.md",
    "chars": 786,
    "preview": "---\nname: Feature request\nabout: Suggest an idea for this project\ntitle: '[FEATURE] '\nlabels: enhancement\nassignees: ''\n"
  },
  {
    "path": ".github/pull_request_template.md",
    "chars": 828,
    "preview": "# Pull Request\n\n## Description\nBrief description of changes made.\n\n## Type of Change\n- [ ] Bug fix (non-breaking change "
  },
  {
    "path": ".github/workflows/docs.yml",
    "chars": 858,
    "preview": "name: Build and Deploy Documentation\n\non:\n  push:\n    branches: [ main ]\n\npermissions:\n  contents: write\n\njobs:\n  docs:\n"
  },
  {
    "path": ".github/workflows/test.yml",
    "chars": 1669,
    "preview": "name: Tests\n\non:\n  push:\n    branches: [ main, develop ]\n  pull_request:\n    branches: [ main ]\n\njobs:\n  test:\n    runs-"
  },
  {
    "path": ".gitignore",
    "chars": 2949,
    "preview": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packagi"
  },
  {
    "path": "CHANGELOG.md",
    "chars": 11988,
    "preview": "# Changelog\n\nAll notable changes to this project will be documented in this file.\n\n## [1.0.6] - 2025-11-16\n\n### Bug Fixe"
  },
  {
    "path": "CODE_OF_CONDUCT.md",
    "chars": 2461,
    "preview": "# Contributor Covenant Code of Conduct\n\n## Our Pledge\n\nWe as members, contributors, and leaders pledge to make participa"
  },
  {
    "path": "CONTRIBUTING.md",
    "chars": 749,
    "preview": "# Contributing to GPlay Scraper\n\nThank you for your interest in contributing! \n\n## Development Setup\n\n1. Fork the reposi"
  },
  {
    "path": "LICENSE",
    "chars": 1068,
    "preview": "MIT License\n\nCopyright (c) 2025 Mohammed Cha\n\nPermission is hereby granted, free of charge, to any person obtaining a co"
  },
  {
    "path": "MANIFEST.in",
    "chars": 184,
    "preview": "include README.md\ninclude LICENSE\ninclude requirements.txt\ninclude CHANGELOG.md\ninclude CONTRIBUTING.md\ninclude SECURITY"
  },
  {
    "path": "README/APP_METHODS.md",
    "chars": 10405,
    "preview": "# App Methods\n\nExtract detailed information about individual Google Play Store apps.\n\n## Quick Start\n\n```python\nfrom gpl"
  },
  {
    "path": "README/DEVELOPER_METHODS.md",
    "chars": 7716,
    "preview": "# Developer Methods\n\nGet all apps published by a specific developer on Google Play Store.\n\n## Quick Start\n\n```python\nfro"
  },
  {
    "path": "README/LIST_METHODS.md",
    "chars": 11136,
    "preview": "# List Methods\n\nGet top charts from Google Play Store (top free, top paid, top grossing).\n\n## Quick Start\n\n```python\nfro"
  },
  {
    "path": "README/README.md",
    "chars": 6816,
    "preview": "# GPlay Scraper Documentation\n\nComplete documentation for all 7 method types in GPlay Scraper.\n\n## 📚 Method Documentatio"
  },
  {
    "path": "README/REVIEWS_METHODS.md",
    "chars": 11596,
    "preview": "# Reviews Methods\n\nExtract user reviews from Google Play Store apps with ratings, content, and metadata.\n\n## Quick Start"
  },
  {
    "path": "README/SEARCH_METHODS.md",
    "chars": 10568,
    "preview": "# Search Methods\n\nSearch for apps on Google Play Store by keyword, app name, or category.\n\n## Quick Start\n\n```python\nfro"
  },
  {
    "path": "README/SIMILAR_METHODS.md",
    "chars": 11192,
    "preview": "# Similar Methods\n\nFind similar and related apps on Google Play Store based on a reference app.\n\n## Quick Start\n\n```pyth"
  },
  {
    "path": "README/SUGGEST_METHODS.md",
    "chars": 11002,
    "preview": "# Suggest Methods\n\nGet search suggestions and autocomplete from Google Play Store for keyword discovery and ASO.\n\n## Qui"
  },
  {
    "path": "README.md",
    "chars": 14560,
    "preview": "# Google Play Scraper - Python Library 📱\n\n[![PyPI version](https://badge.fury.io/py/gplay-scraper.svg)](https://badge.fu"
  },
  {
    "path": "SECURITY.md",
    "chars": 410,
    "preview": "# Security Policy\n\n## Supported Versions\n\n| Version | Supported          |\n| ------- | ------------------ |\n| 1.0.x   | "
  },
  {
    "path": "build_docs.py",
    "chars": 1942,
    "preview": "#!/usr/bin/env python3\n\"\"\"Build documentation using Sphinx.\"\"\"\n\nimport os\nimport sys\nimport subprocess\nfrom pathlib impo"
  },
  {
    "path": "docs/README.md",
    "chars": 384,
    "preview": "# GPlay Scraper Documentation\n\n## Build Documentation\n\n```bash\npip install -r requirements.txt\ncd docs\nsphinx-build -b h"
  },
  {
    "path": "docs/api/app.rst",
    "chars": 9235,
    "preview": "App Methods\n===========\n\nExtract comprehensive app data with 57 fields including install analytics, ratings, pricing, an"
  },
  {
    "path": "docs/api/developer.rst",
    "chars": 3033,
    "preview": "Developer Methods\n=================\n\nGet all apps from a specific developer or company.\n\nOverview\n--------\n\nThe Develope"
  },
  {
    "path": "docs/api/list.rst",
    "chars": 3207,
    "preview": "List Methods\n============\n\nGet top charts (top free, top paid, top grossing apps).\n\nOverview\n--------\n\nThe List methods "
  },
  {
    "path": "docs/api/reviews.rst",
    "chars": 5036,
    "preview": "Reviews Methods\n===============\n\nExtract user reviews and ratings with sorting options.\n\nOverview\n--------\n\nThe Reviews "
  },
  {
    "path": "docs/api/search.rst",
    "chars": 6397,
    "preview": "Search Methods\n==============\n\nSearch for apps on Google Play Store by keyword and get results with 11 fields per app.\n\n"
  },
  {
    "path": "docs/api/similar.rst",
    "chars": 1066,
    "preview": "Similar Methods\n===============\n\nFind apps similar to a given app (competitors or alternatives).\n\nOverview\n--------\n\nThe"
  },
  {
    "path": "docs/api/suggest.rst",
    "chars": 1716,
    "preview": "Suggest Methods\n===============\n\nGet search suggestions and autocomplete.\n\nOverview\n--------\n\nThe Suggest methods provid"
  },
  {
    "path": "docs/conf.py",
    "chars": 1795,
    "preview": "project = 'GPlay Scraper'\ncopyright = '2025, GPlay Scraper'\nauthor = 'GPlay Scraper'\nrelease = '1.0.5'\n\nextensions = [\n "
  },
  {
    "path": "docs/configuration.rst",
    "chars": 5625,
    "preview": "Configuration\n=============\n\nAdvanced configuration options for GPlay Scraper.\n\nHTTP Client Selection\n------------------"
  },
  {
    "path": "docs/error_handling.rst",
    "chars": 6151,
    "preview": "Error Handling\n==============\n\nGuide to handling errors and exceptions in GPlay Scraper.\n\nException Types\n--------------"
  },
  {
    "path": "docs/examples.rst",
    "chars": 7024,
    "preview": "Examples\n========\n\nPractical examples of using GPlay Scraper for common tasks.\n\nApp Analytics Dashboard\n----------------"
  },
  {
    "path": "docs/fields.rst",
    "chars": 5851,
    "preview": "Field Reference\n===============\n\nComplete reference of all 112 fields returned by GPlay Scraper.\n\nApp Fields (57 Fields)"
  },
  {
    "path": "docs/index.rst",
    "chars": 1343,
    "preview": "GPlay Scraper Documentation\n============================\n\nA comprehensive Python library for scraping Google Play Store "
  },
  {
    "path": "docs/installation.rst",
    "chars": 2044,
    "preview": "Installation\n============\n\nRequirements\n------------\n\n* Python 3.7 or higher\n* pip package manager\n\nBasic Installation\n-"
  },
  {
    "path": "docs/quickstart.rst",
    "chars": 6192,
    "preview": "Quick Start Guide\n=================\n\nThis guide will get you started with GPlay Scraper in 5 minutes.\n\nBasic Usage\n-----"
  },
  {
    "path": "docs/requirements.txt",
    "chars": 92,
    "preview": "sphinx>=7.0.0\nsphinx-book-theme>=1.0.0\nsphinx-copybutton>=0.5.0\nsphinx-autobuild>=2021.3.14\n"
  },
  {
    "path": "examples/README.md",
    "chars": 3207,
    "preview": "# Examples\n\nThis folder contains example scripts demonstrating all methods for each of the 7 method types.\n\n## Files\n\n##"
  },
  {
    "path": "examples/app_methods_example.py",
    "chars": 1657,
    "preview": "\"\"\"\nApp Methods Example\nDemonstrates all 6 app methods for extracting app details\n\nParameters:\n- app_id: App package nam"
  },
  {
    "path": "examples/developer_methods_example.py",
    "chars": 2035,
    "preview": "\"\"\"\nDeveloper Methods Example\nDemonstrates all 6 developer methods for getting developer's apps\n\nParameters:\n- dev_id: D"
  },
  {
    "path": "examples/list_methods_example.py",
    "chars": 2293,
    "preview": "\"\"\"\nList Methods Example\nDemonstrates all 6 list methods for getting top charts\n\nParameters:\n- collection: Chart type - "
  },
  {
    "path": "examples/reviews_methods_example.py",
    "chars": 2238,
    "preview": "\"\"\"\nReviews Methods Example\nDemonstrates all 6 reviews methods for extracting user reviews\n\nParameters:\n- app_id: App pa"
  },
  {
    "path": "examples/search_methods_example.py",
    "chars": 1928,
    "preview": "\"\"\"\nSearch Methods Example\nDemonstrates all 6 search methods for finding apps\n\nParameters:\n- query: Search keyword\n- cou"
  },
  {
    "path": "examples/similar_methods_example.py",
    "chars": 1996,
    "preview": "\"\"\"\nSimilar Methods Example\nDemonstrates all 6 similar methods for finding related apps\n\nParameters:\n- app_id: App packa"
  },
  {
    "path": "examples/suggest_methods_example.py",
    "chars": 1406,
    "preview": "\"\"\"\nSuggest Methods Example\nDemonstrates all 4 suggest methods for getting search suggestions\n\nParameters:\n- term: Searc"
  },
  {
    "path": "gplay_scraper/__init__.py",
    "chars": 1268,
    "preview": "\"\"\"GPlay Scraper - Google Play Store scraping library.\n\nThis package provides comprehensive tools for scraping Google Pl"
  },
  {
    "path": "gplay_scraper/app.py",
    "chars": 26164,
    "preview": "\"\"\"Main GPlayScraper class that provides unified access to all scraping methods.\n\nThis module contains the main GPlayScr"
  },
  {
    "path": "gplay_scraper/config.py",
    "chars": 5912,
    "preview": "\"\"\"Configuration module for GPlay Scraper.\n\nContains all constants, default values, URLs, and error messages.\n\"\"\"\n\nimpor"
  },
  {
    "path": "gplay_scraper/core/__init__.py",
    "chars": 346,
    "preview": "\"\"\"Core module containing all 7 method classes for Google Play Store scraping.\"\"\"\n\nfrom .gplay_methods import AppMethods"
  },
  {
    "path": "gplay_scraper/core/gplay_methods.py",
    "chars": 36989,
    "preview": "\"\"\"Method classes for all 7 scraping types.\n\nThis module contains 7 method classes, each providing 6 functions (except S"
  },
  {
    "path": "gplay_scraper/core/gplay_parser.py",
    "chars": 28123,
    "preview": "\"\"\"Parser classes for extracting and formatting data from raw responses.\n\nThis module contains 7 parser classes that han"
  },
  {
    "path": "gplay_scraper/core/gplay_scraper.py",
    "chars": 23177,
    "preview": "import json\nimport re\nimport logging\nfrom typing import Dict\nfrom ..utils.http_client import HttpClient\nfrom ..config im"
  },
  {
    "path": "gplay_scraper/exceptions.py",
    "chars": 803,
    "preview": "\"\"\"Custom exceptions for GPlay Scraper.\n\nThis module defines all custom exceptions used throughout the library.\n\"\"\"\n\n\ncl"
  },
  {
    "path": "gplay_scraper/models/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "gplay_scraper/models/element_specs.py",
    "chars": 21354,
    "preview": "\"\"\"Element specifications for data extraction from Google Play Store.\n\nThis module defines ElementSpec class and Element"
  },
  {
    "path": "gplay_scraper/utils/__init__.py",
    "chars": 269,
    "preview": "from .helpers import *\n\n__all__ = [\n    'nested_lookup', 'unescape_text', 'extract_categories', 'get_categories',\n    'p"
  },
  {
    "path": "gplay_scraper/utils/constants.py",
    "chars": 20556,
    "preview": "# Review sort options\nSORT_NAMES = {\n    'RELEVANT': 1,  # Most relevant reviews\n    'NEWEST': 2,    # Newest reviews fi"
  },
  {
    "path": "gplay_scraper/utils/error_handling.py",
    "chars": 11626,
    "preview": "\"\"\"Unified error handling decorators for all gplay_scraper methods.\"\"\"\n\nimport time\nimport logging\nimport json\nfrom func"
  },
  {
    "path": "gplay_scraper/utils/helpers.py",
    "chars": 16171,
    "preview": "\"\"\"Helper functions for data processing and manipulation.\n\nThis module contains utility functions for:\n- Text unescaping"
  },
  {
    "path": "gplay_scraper/utils/http_client.py",
    "chars": 36876,
    "preview": "\"\"\"HTTP client with support for 7 different libraries and automatic fallback.\n\nThis module provides a unified HTTP clien"
  },
  {
    "path": "output/app_example.json",
    "chars": 6337,
    "preview": "{\n  \"appId\": \"com.playdead.limbo.full\",\n  \"title\": \"LIMBO\",\n  \"summary\": \"Uncertain of his sister's fate, a boy enters L"
  },
  {
    "path": "output/developer_example.json",
    "chars": 22193,
    "preview": "[\n  {\n    \"appId\": \"com.google.android.apps.bard\",\n    \"title\": \"Google Gemini\",\n    \"description\": \"Supercharge your cr"
  },
  {
    "path": "output/list_example.json",
    "chars": 50193,
    "preview": "[\n  {\n    \"appId\": \"com.block.juggle\",\n    \"title\": \"Block Blast!\",\n    \"description\": \"Enter the world of Block Blast, "
  },
  {
    "path": "output/reviews_example.json",
    "chars": 4210,
    "preview": "[\n  {\n    \"reviewId\": \"89d0170a-6e0a-4fd3-8f9c-b52ed975b701\",\n    \"userName\": \"Athul L Kumar\",\n    \"userImage\": \"https:/"
  },
  {
    "path": "output/search_example.json",
    "chars": 4699,
    "preview": "[\n  {\n    \"appId\": \"com.instagram.android\",\n    \"title\": \"Instagram\",\n    \"description\": \"Create & share photos, stories"
  },
  {
    "path": "output/similar_example.json",
    "chars": 11209,
    "preview": "[\n  {\n    \"appId\": \"com.playdigious.littlenightmare\",\n    \"title\": \"Little Nightmares\",\n    \"description\": \"First availa"
  },
  {
    "path": "output/suggest_example.json",
    "chars": 1062,
    "preview": "{\n    \"photo editor\": [\n        \"photo editor\",\n        \"photo editor free\",\n        \"photo editor app\",\n        \"photo "
  },
  {
    "path": "requirements.txt",
    "chars": 293,
    "preview": "# Core dependencies\nrequests>=2.25.0\nbeautifulsoup4>=4.9.0\n\n# HTTP client libraries\ncurl-cffi>=0.5.0\ntls-client>=1.0.0\nu"
  },
  {
    "path": "setup.py",
    "chars": 4615,
    "preview": "from setuptools import setup, find_packages\n\nwith open(\"README.md\", \"r\", encoding=\"utf-8\") as fh:\n    long_description ="
  },
  {
    "path": "tests/__init__.py",
    "chars": 15,
    "preview": "# Tests package"
  },
  {
    "path": "tests/test_app_methods.py",
    "chars": 5138,
    "preview": "import unittest\nimport warnings\nimport time\nfrom gplay_scraper import GPlayScraper\nfrom gplay_scraper.exceptions import "
  },
  {
    "path": "tests/test_basic.py",
    "chars": 4247,
    "preview": "import unittest\nimport sys\nimport os\n\n# Add the parent directory to the path to import gplay_scraper\nsys.path.insert(0, "
  },
  {
    "path": "tests/test_developer_methods.py",
    "chars": 5097,
    "preview": "\"\"\"\nUnit tests for Developer Methods\n\"\"\"\n\nimport unittest\nimport time\nimport warnings\nfrom gplay_scraper import GPlayScr"
  },
  {
    "path": "tests/test_list_methods.py",
    "chars": 5130,
    "preview": "\"\"\"\nUnit tests for List Methods\n\"\"\"\n\nimport unittest\nimport time\nimport warnings\nfrom gplay_scraper import GPlayScraper\n"
  },
  {
    "path": "tests/test_package.py",
    "chars": 3099,
    "preview": "#!/usr/bin/env python3\n\"\"\"\nSimple test script to verify gplay-scraper package functionality\nThis script tests basic impo"
  },
  {
    "path": "tests/test_reviews_methods.py",
    "chars": 5344,
    "preview": "\"\"\"\nUnit tests for Reviews Methods\n\"\"\"\n\nimport unittest\nimport time\nimport warnings\nfrom gplay_scraper import GPlayScrap"
  },
  {
    "path": "tests/test_search_methods.py",
    "chars": 5303,
    "preview": "\"\"\"\nUnit tests for Search Methods\n\"\"\"\n\nimport unittest\nimport time\nimport warnings\nfrom gplay_scraper import GPlayScrape"
  },
  {
    "path": "tests/test_similar_methods.py",
    "chars": 5064,
    "preview": "\"\"\"\nUnit tests for Similar Methods\n\"\"\"\n\nimport unittest\nimport time\nimport warnings\nfrom gplay_scraper import GPlayScrap"
  },
  {
    "path": "tests/test_suggest_methods.py",
    "chars": 3539,
    "preview": "\"\"\"\nUnit tests for Suggest Methods\n\"\"\"\n\nimport unittest\nimport time\nimport warnings\nfrom gplay_scraper import GPlayScrap"
  }
]

About this extraction

This page contains the full source code of the Mohammedcha/gplay-scraper GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction covers 81 files (567.4 KB, roughly 153.3k tokens) and includes a symbol index of 276 extracted functions, classes, methods, constants, and types. It can be used with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input.
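The figures above (567.4 KB of text, roughly 153.3k tokens) imply just under 4 characters per token, a common rule of thumb for English-heavy code dumps. A quick sanity check of that ratio:

```python
size_bytes = 567.4 * 1024      # 567.4 KB, as reported above
tokens = 153_300               # ~153.3k tokens, as reported above
print(round(size_bytes / tokens, 2))  # 3.79 characters (bytes) per token
```

This treats bytes and characters as interchangeable, which holds approximately for mostly-ASCII source code.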

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.