[
  {
    "path": ".github/workflows/publication.yml",
    "content": "name: Publication\non:\n  release:\n    types: [created]\njobs:\n  deploy:\n    runs-on: ubuntu-latest\n    steps:\n    - uses: actions/checkout@v3\n    - uses: actions/setup-python@v4\n      with:\n        python-version: '3.8'\n    - run: python -m pip install --upgrade tox-gh-actions\n    - env:\n        TWINE_USERNAME: ${{ secrets.PYPI_USERNAME }}\n        TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}\n      run: tox -e publish -- upload\n"
  },
  {
    "path": ".github/workflows/validation.yml",
    "content": "name: Validation\non: [push, pull_request]\njobs:\n  build:\n    runs-on: ubuntu-20.04\n    strategy:\n      matrix:\n        python-version: [\"3.8\", \"3.9\", \"3.10\", \"3.11\", \"3.12\", \"3.13\"]\n    steps:\n    - uses: actions/checkout@v3\n    - uses: actions/setup-python@v4\n      with:\n        python-version: ${{ matrix.python-version }}\n    - run: python -m pip install --upgrade tox-gh-actions\n    - run: python -m tox\n"
  },
  {
    "path": ".gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\npip-wheel-metadata/\nshare/python-wheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.nox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n.hypothesis/\n.pytest_cache/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\ndb.sqlite3-journal\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# IPython\nprofile_default/\nipython_config.py\n\n# pyenv\n.python-version\n\n# pipenv\n#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.\n#   However, in case of collaboration, if having platform-specific dependencies or dependencies\n#   having no cross-platform support, pipenv may install dependencies that don't work, or not\n#   install all needed dependencies.\n#Pipfile.lock\n\n# celery beat schedule file\ncelerybeat-schedule\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n.dmypy.json\ndmypy.json\n\n# Pyre type checker\n.pyre/\n"
  },
  {
    "path": "CONTRIBUTING.rst",
    "content": "Contributing\n============\n\nContributions are very welcome. Tests can be run with `tox <https://tox.readthedocs.io/en/latest/>`_.\nPlease ensure that test coverage at least stays the same before you submit a pull request.\n\nDevelopment Environment Setup\n-----------------------------\n\nHere's how to install pytest-mypy in development mode so you can test your changes locally:\n\n.. code-block:: bash\n\n    tox --devenv venv\n    venv/bin/pytest --mypy test_example.py\n\nHow to publish a new version to PyPI\n------------------------------------\n\nCreate a GitHub release, and the new version will be published to PyPI automatically.\nTo publish manually:\n\n.. code-block:: bash\n\n    tox -e publish -- upload\n"
  },
  {
    "path": "LICENSE",
    "content": "The MIT License (MIT)\n\nCopyright (c) 2016 Daniel Bader\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\nTHE SOFTWARE.\n"
  },
  {
    "path": "README.rst",
    "content": "pytest-mypy\n===================================\n\nMypy static type checker plugin for pytest\n\n.. image:: https://img.shields.io/pypi/v/pytest-mypy.svg\n   :target: https://pypi.org/project/pytest-mypy/\n   :alt: See Latest Release on PyPI\n\nFeatures\n--------\n\n* Runs the mypy static type checker on your source files as part of your pytest test runs.\n* Does for `mypy`_ what the `pytest-flake8`_ plugin does for `flake8`_.\n* This is a work in progress – pull requests appreciated.\n\n\nInstallation\n------------\n\nYou can install \"pytest-mypy\" via `pip`_ from `PyPI`_:\n\n.. code-block:: bash\n\n    $ pip install pytest-mypy\n\nUsage\n-----\n\nYou can enable pytest-mypy with the ``--mypy`` flag:\n\n.. code-block:: bash\n\n    $ py.test --mypy test_*.py\n\nMypy supports `reading configuration settings <http://mypy.readthedocs.io/en/latest/config_file.html>`_ from a ``mypy.ini`` file.\nAlternatively, the plugin can be configured in a ``conftest.py`` to invoke mypy with extra options:\n\n.. code-block:: python\n\n    def pytest_configure(config):\n        plugin = config.pluginmanager.getplugin('mypy')\n        plugin.mypy_argv.append('--check-untyped-defs')\n\nYou can restrict your test run to mypy checks alone, skipping all other tests, with the ``-m`` option:\n\n.. code-block:: bash\n\n    $ py.test --mypy -m mypy test_*.py\n\nLicense\n-------\n\nDistributed under the terms of the `MIT`_ license, \"pytest-mypy\" is free and open source software.\n\nIssues\n------\n\nIf you encounter any problems, please `file an issue`_ along with a detailed description.\n\nMeta\n----\n\nDaniel Bader – `@dbader_org`_ – https://dbader.org – mail@dbader.org\n\nhttps://github.com/realpython/pytest-mypy\n\n\n.. _`MIT`: http://opensource.org/licenses/MIT\n.. _`file an issue`: https://github.com/realpython/pytest-mypy/issues\n.. _`pip`: https://pypi.python.org/pypi/pip/\n.. _`PyPI`: https://pypi.python.org/pypi\n.. _`mypy`: http://mypy-lang.org/\n.. _`pytest-flake8`: https://pypi.python.org/pypi/pytest-flake8\n.. _`flake8`: https://pypi.python.org/pypi/flake8\n.. _`@dbader_org`: https://twitter.com/dbader_org\n"
  },
  {
    "path": "changelog.md",
    "content": "# Changelog\n\nThe Changelog has moved to https://github.com/realpython/pytest-mypy/releases\n"
  },
  {
    "path": "pyproject.toml",
    "content": "[build-system]\nrequires = [\"setuptools >= 61.0\", \"setuptools-scm >= 7.1\"]\nbuild-backend = \"setuptools.build_meta\"\n\n[project]\nname = \"pytest-mypy\"\ndynamic = [\"version\"]\ndescription = \"A Pytest Plugin for Mypy\"\nreadme = \"README.rst\"\nlicense = {file = \"LICENSE\"}\nmaintainers = [\n    {name = \"David Tucker\", email = \"david@tucker.name\"}\n]\nclassifiers = [\n    \"Development Status :: 5 - Production/Stable\",\n    \"Framework :: Pytest\",\n    \"Intended Audience :: Developers\",\n    \"License :: OSI Approved :: MIT License\",\n    \"Operating System :: OS Independent\",\n    \"Programming Language :: Python\",\n    \"Programming Language :: Python :: 3\",\n    \"Programming Language :: Python :: 3.8\",\n    \"Programming Language :: Python :: 3.9\",\n    \"Programming Language :: Python :: 3.10\",\n    \"Programming Language :: Python :: 3.11\",\n    \"Programming Language :: Python :: 3.12\",\n    \"Programming Language :: Python :: 3.13\",\n    \"Programming Language :: Python :: Implementation :: CPython\",\n    \"Topic :: Software Development :: Testing\",\n]\nrequires-python = \">=3.8\"\ndependencies = [\n    \"filelock>=3.0\",\n    \"mypy>=1.0\",\n    \"pytest>=7.0\",\n]\n\n[project.entry-points.pytest11]\nmypy = \"pytest_mypy\"\n\n[project.urls]\nhomepage = \"https://github.com/realpython/pytest-mypy\"\n\n[tool.setuptools_scm]\n"
  },
  {
    "path": "src/pytest_mypy/__init__.py",
    "content": "\"\"\"Mypy static type checker plugin for Pytest\"\"\"\n\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\nimport json\nfrom pathlib import Path\nfrom tempfile import NamedTemporaryFile\nimport typing\n\nfrom filelock import FileLock\nimport mypy.api\nimport pytest\n\nif typing.TYPE_CHECKING:  # pragma: no cover\n    from typing import (\n        Any,\n        Dict,\n        IO,\n        Iterator,\n        List,\n        Optional,\n        Tuple,\n        Union,\n    )\n\n    # https://github.com/pytest-dev/pytest/issues/7469\n    from _pytest._code.code import TerminalRepr\n\n    # https://github.com/pytest-dev/pytest/pull/12661\n    from _pytest.terminal import TerminalReporter\n\n    # https://github.com/pytest-dev/pytest-xdist/issues/1121\n    from xdist.workermanage import WorkerController  # type: ignore\n\n\n@dataclass(frozen=True)  # compat python < 3.10 (kw_only=True)\nclass MypyConfigStash:\n    \"\"\"Plugin data stored in the pytest.Config stash.\"\"\"\n\n    mypy_results_path: Path\n\n    @classmethod\n    def from_serialized(cls, serialized: str) -> MypyConfigStash:\n        return cls(mypy_results_path=Path(serialized))\n\n    def serialized(self) -> str:\n        return str(self.mypy_results_path)\n\n\nitem_marker = \"mypy\"\nmypy_argv: List[str] = []\nnodeid_name = \"mypy\"\nstash_key = {\n    \"config\": pytest.StashKey[MypyConfigStash](),\n}\nterminal_summary_title = \"mypy\"\n\n\ndef default_test_name_formatter(*, item: MypyFileItem) -> str:\n    path = item.path.relative_to(item.config.invocation_params.dir)\n    return f\"[{terminal_summary_title}] {path}\"\n\n\ntest_name_formatter = default_test_name_formatter\n\n\ndef default_file_error_formatter(\n    item: MypyItem,\n    results: MypyResults,\n    lines: List[str],\n) -> str:\n    \"\"\"Create a string to be displayed when mypy finds errors in a file.\"\"\"\n    if item.config.option.mypy_report_style == \"mypy\":\n        return \"\\n\".join(lines)\n    return \"\\n\".join(line.partition(\":\")[2].strip() for line in lines)\n\n\nfile_error_formatter = default_file_error_formatter\n\n\ndef pytest_addoption(parser: pytest.Parser) -> None:\n    \"\"\"Add options for enabling and running mypy.\"\"\"\n    group = parser.getgroup(\"mypy\")\n    group.addoption(\"--mypy\", action=\"store_true\", help=\"run mypy on .py files\")\n    group.addoption(\n        \"--mypy-ignore-missing-imports\",\n        action=\"store_true\",\n        help=\"suppresses error messages about imports that cannot be resolved\",\n    )\n    group.addoption(\n        \"--mypy-config-file\",\n        action=\"store\",\n        type=str,\n        help=\"adds custom mypy config file\",\n    )\n    styles = {\n        \"mypy\": \"modify the original mypy output as little as possible\",\n        \"no-path\": \"(default) strip the path prefix from mypy errors\",\n    }\n    group.addoption(\n        \"--mypy-report-style\",\n        choices=list(styles),\n        help=\"change the way mypy output is reported:\\n\"\n        + \"\\n\".join(f\"- {name}: {desc}\" for name, desc in styles.items()),\n    )\n    group.addoption(\n        \"--mypy-no-status-check\",\n        action=\"store_true\",\n        help=\"ignore mypy's exit status\",\n    )\n    group.addoption(\n        \"--mypy-xfail\",\n        action=\"store_true\",\n        help=\"xfail mypy errors\",\n    )\n\n\ndef _xdist_worker(config: pytest.Config) -> Dict[str, Any]:\n    try:\n        return {\"input\": _xdist_workerinput(config)}\n    except AttributeError:\n        return {}\n\n\ndef _xdist_workerinput(node: Union[WorkerController, pytest.Config]) -> Any:\n    try:\n        # mypy complains that pytest.Config does not have this attribute,\n        # but xdist.remote defines it in worker processes.\n        return node.workerinput  # type: ignore[union-attr]\n    except AttributeError:  # compat xdist < 2.0\n        return node.slaveinput  # type: ignore[union-attr]\n\n\nclass MypyXdistControllerPlugin:\n    \"\"\"A plugin that is only registered on xdist controller processes.\"\"\"\n\n    def pytest_configure_node(self, node: WorkerController) -> None:\n        \"\"\"Pass the config stash to workers.\"\"\"\n        _xdist_workerinput(node)[\"mypy_config_stash_serialized\"] = node.config.stash[\n            stash_key[\"config\"]\n        ].serialized()\n\n\ndef pytest_configure(config: pytest.Config) -> None:\n    \"\"\"\n    Initialize the path used to cache mypy results,\n    register a custom marker for MypyItems,\n    and configure the plugin based on the CLI.\n    \"\"\"\n    xdist_worker = _xdist_worker(config)\n    if not xdist_worker:\n        config.pluginmanager.register(MypyControllerPlugin())\n\n        # Get the path to a temporary file and delete it.\n        # The first MypyItem to run will see the file does not exist,\n        # and it will run and parse mypy results to create it.\n        # Subsequent MypyItems will see the file exists,\n        # and they will read the parsed results.\n        with NamedTemporaryFile(delete=True) as tmp_f:\n            config.stash[stash_key[\"config\"]] = MypyConfigStash(\n                mypy_results_path=Path(tmp_f.name),\n            )\n\n        # If xdist is enabled, then the results path should be exposed to\n        # the workers so that they know where to read parsed results from.\n        if config.pluginmanager.getplugin(\"xdist\"):\n            config.pluginmanager.register(MypyXdistControllerPlugin())\n    else:\n        # xdist workers create the stash using input from the controller plugin.\n        config.stash[stash_key[\"config\"]] = MypyConfigStash.from_serialized(\n            xdist_worker[\"input\"][\"mypy_config_stash_serialized\"]\n        )\n\n    config.addinivalue_line(\n        \"markers\",\n        f\"{item_marker}: mark tests to be checked by mypy.\",\n    )\n    if config.getoption(\"--mypy-ignore-missing-imports\"):\n        mypy_argv.append(\"--ignore-missing-imports\")\n\n    mypy_config_file = config.getoption(\"--mypy-config-file\")\n    if mypy_config_file:\n        mypy_argv.append(f\"--config-file={mypy_config_file}\")\n\n    if any(\n        [\n            config.option.mypy,\n            config.option.mypy_config_file,\n            config.option.mypy_report_style,\n            config.option.mypy_ignore_missing_imports,\n            config.option.mypy_no_status_check,\n            config.option.mypy_xfail,\n        ],\n    ):\n        config.pluginmanager.register(MypyCollectionPlugin())\n\n\nclass MypyCollectionPlugin:\n    \"\"\"A Pytest plugin that collects MypyFiles.\"\"\"\n\n    def pytest_collect_file(\n        self,\n        file_path: Path,\n        parent: pytest.Collector,\n    ) -> Optional[MypyFile]:\n        \"\"\"Create a MypyFileItem for every file mypy should run on.\"\"\"\n        if file_path.suffix in {\".py\", \".pyi\"}:\n            # Do not create MypyFile instance for a .py file if a\n            # .pyi file with the same name already exists;\n            # pytest will complain about duplicate modules otherwise\n            if (\n                file_path.suffix == \".pyi\"\n                or not file_path.with_suffix(\".pyi\").is_file()\n            ):\n                return MypyFile.from_parent(parent=parent, path=file_path)\n        return None\n\n\nclass MypyFile(pytest.File):\n    \"\"\"A File that Mypy will run on.\"\"\"\n\n    def collect(self) -> Iterator[MypyItem]:\n        \"\"\"Create a MypyFileItem for the File.\"\"\"\n        yield MypyFileItem.from_parent(parent=self, name=nodeid_name)\n        # Since mypy might check files that were not collected,\n        # pytest could pass even though mypy failed!\n        # To prevent that, add an explicit check for the mypy exit status.\n        if not self.session.config.option.mypy_no_status_check and not any(\n            isinstance(item, MypyStatusItem) for item in self.session.items\n        ):\n            yield MypyStatusItem.from_parent(\n                parent=self,\n                name=nodeid_name + \"-status\",\n            )\n\n\nclass MypyItem(pytest.Item):\n    \"\"\"A Mypy-related test Item.\"\"\"\n\n    def __init__(self, *args: Any, **kwargs: Any):\n        super().__init__(*args, **kwargs)\n        self.add_marker(item_marker)\n\n    def repr_failure(\n        self,\n        excinfo: pytest.ExceptionInfo[BaseException],\n        style: Optional[str] = None,\n    ) -> Union[str, TerminalRepr]:\n        \"\"\"\n        Unwrap mypy errors so we get a clean error message without the\n        full exception repr.\n        \"\"\"\n        if excinfo.errisinstance(MypyError):\n            return str(excinfo.value.args[0])\n        return super().repr_failure(excinfo)\n\n\ndef _error_severity(line: str) -> Optional[str]:\n    components = [component.strip() for component in line.split(\":\", 3)]\n    if len(components) < 2:\n        return None\n    # The second component is either the line or the severity:\n    # demo/note.py:2: note: By default the bodies of untyped functions are not checked\n    # demo/sub/conftest.py: error: Duplicate module named \"conftest\"\n    return components[2] if components[1].isdigit() else components[1]\n\n\nclass MypyFileItem(MypyItem):\n    \"\"\"A check for Mypy errors in a File.\"\"\"\n\n    def runtest(self) -> None:\n        \"\"\"Raise an exception if mypy found errors for this item.\"\"\"\n        results = MypyResults.from_session(self.session)\n        lines = results.path_lines.get(self.path.resolve(), [])\n        if lines and not all(_error_severity(line) == \"note\" for line in lines):\n            if self.session.config.option.mypy_xfail:\n                self.add_marker(\n                    pytest.mark.xfail(\n                        raises=MypyError,\n                        reason=\"mypy errors are expected by --mypy-xfail.\",\n                    )\n                )\n            raise MypyError(file_error_formatter(self, results, lines))\n\n    def reportinfo(self) -> Tuple[Path, None, str]:\n        \"\"\"Produce a heading for the test report.\"\"\"\n        return (self.path, None, test_name_formatter(item=self))\n\n\nclass MypyStatusItem(MypyItem):\n    \"\"\"A check for a non-zero mypy exit status.\"\"\"\n\n    def runtest(self) -> None:\n        \"\"\"Raise a MypyError if mypy exited with a non-zero status.\"\"\"\n        results = MypyResults.from_session(self.session)\n        if results.status:\n            if self.session.config.option.mypy_xfail:\n                self.add_marker(\n                    pytest.mark.xfail(\n                        raises=MypyError,\n                        reason=(\n                            \"A non-zero mypy exit status is expected by --mypy-xfail.\"\n                        ),\n                    )\n                )\n            raise MypyError(f\"mypy exited with status {results.status}.\")\n\n\n@dataclass(frozen=True)  # compat python < 3.10 (kw_only=True)\nclass MypyResults:\n    \"\"\"Parsed results from Mypy.\"\"\"\n\n    _encoding = \"utf-8\"\n\n    opts: List[str]\n    args: List[str]\n    stdout: str\n    stderr: str\n    status: int\n    path_lines: Dict[Optional[Path], List[str]]\n\n    def dump(self, results_f: IO[bytes]) -> None:\n        \"\"\"Cache results in a format that can be parsed by load().\"\"\"\n        prepared = vars(self).copy()\n        prepared[\"path_lines\"] = {\n            str(path or \"\"): lines for path, lines in prepared[\"path_lines\"].items()\n        }\n        results_f.write(json.dumps(prepared).encode(self._encoding))\n\n    @classmethod\n    def load(cls, results_f: IO[bytes]) -> MypyResults:\n        \"\"\"Get results cached by dump().\"\"\"\n        prepared = json.loads(results_f.read().decode(cls._encoding))\n        prepared[\"path_lines\"] = {\n            Path(path) if path else None: lines\n            for path, lines in prepared[\"path_lines\"].items()\n        }\n        return cls(**prepared)\n\n    @classmethod\n    def from_mypy(\n        cls,\n        paths: List[Path],\n        *,\n        opts: Optional[List[str]] = None,\n    ) -> MypyResults:\n        \"\"\"Generate results from mypy.\"\"\"\n\n        if opts is None:\n            opts = mypy_argv[:]\n        args = [str(path) for path in paths]\n\n        stdout, stderr, status = mypy.api.run(opts + args)\n\n        path_lines: Dict[Optional[Path], List[str]] = {\n            path.resolve(): [] for path in paths\n        }\n        path_lines[None] = []\n        for line in stdout.split(\"\\n\"):\n            if not line:\n                continue\n            try:\n                path = Path(line.partition(\":\")[0]).resolve()\n            except OSError:\n                path = None\n            try:\n                lines = path_lines[path]\n            except KeyError:\n                lines = path_lines[None]\n            lines.append(line)\n\n        return cls(\n            opts=opts,\n            args=args,\n            stdout=stdout,\n            stderr=stderr,\n            status=status,\n            path_lines=path_lines,\n        )\n\n    @classmethod\n    def from_session(cls, session: pytest.Session) -> MypyResults:\n        \"\"\"Load (or generate) cached mypy results for a pytest session.\"\"\"\n        mypy_results_path = session.config.stash[stash_key[\"config\"]].mypy_results_path\n        with FileLock(str(mypy_results_path) + \".lock\"):\n            try:\n                with open(mypy_results_path, mode=\"rb\") as results_f:\n                    results = cls.load(results_f)\n            except FileNotFoundError:\n                cwd = Path.cwd()\n                results = cls.from_mypy(\n                    [\n                        item.path.relative_to(cwd)\n                        for item in session.items\n                        if isinstance(item, MypyFileItem)\n                    ],\n                )\n                with open(mypy_results_path, mode=\"wb\") as results_f:\n                    results.dump(results_f)\n        return results\n\n\nclass MypyError(Exception):\n    \"\"\"\n    An error caught by mypy, e.g. a type checker violation\n    or a syntax error.\n    \"\"\"\n\n\nclass MypyControllerPlugin:\n    \"\"\"A plugin that is not registered on xdist worker processes.\"\"\"\n\n    def pytest_terminal_summary(\n        self,\n        terminalreporter: TerminalReporter,\n        config: pytest.Config,\n    ) -> None:\n        \"\"\"Report mypy results.\"\"\"\n        mypy_results_path = config.stash[stash_key[\"config\"]].mypy_results_path\n        try:\n            with open(mypy_results_path, mode=\"rb\") as results_f:\n                results = MypyResults.load(results_f)\n        except FileNotFoundError:\n            # No MypyItems executed.\n            return\n        if not results.stdout and not results.stderr:\n            return\n        terminalreporter.section(terminal_summary_title)\n        if results.stdout:\n            if config.option.mypy_xfail:\n                terminalreporter.write(results.stdout)\n            else:\n                for note in (\n                    unreported_note\n                    for path, lines in results.path_lines.items()\n                    if path is not None\n                    if all(_error_severity(line) == \"note\" for line in lines)\n                    for unreported_note in lines\n                ):\n                    terminalreporter.write_line(note)\n                if results.path_lines.get(None):\n                    color = {\"red\": True} if results.status else {\"green\": True}\n                    terminalreporter.write_line(\n                        \"\\n\".join(results.path_lines[None]), **color\n                    )\n        if results.stderr:\n            terminalreporter.write_line(results.stderr, yellow=True)\n\n    def pytest_unconfigure(self, config: pytest.Config) -> None:\n        \"\"\"Clean up the mypy results path.\"\"\"\n        config.stash[stash_key[\"config\"]].mypy_results_path.unlink(missing_ok=True)\n"
  },
  {
    "path": "src/pytest_mypy/py.typed",
    "content": ""
  },
  {
    "path": "tests/conftest.py",
    "content": "import mypy.version\n\npytest_plugins = \"pytester\"\n\n\ndef pytest_report_header():\n    return f\"mypy: {mypy.version.__version__}\"\n"
  },
  {
    "path": "tests/test_pytest_mypy.py",
    "content": "import signal\nimport sys\nimport textwrap\n\nimport mypy.version\nfrom packaging.version import Version\nimport pytest\n\nimport pytest_mypy\n\n\nMYPY_VERSION = Version(mypy.version.__version__)\nPYTEST_VERSION = Version(pytest.__version__)\nPYTHON_VERSION = Version(\n    \".\".join(\n        str(token)\n        for token in [\n            sys.version_info.major,\n            sys.version_info.minor,\n            sys.version_info.micro,\n        ]\n    )\n)\n\n\n@pytest.fixture(\n    params=[\n        True,  # xdist enabled, active\n        False,  # xdist enabled, inactive\n        None,  # xdist disabled\n    ],\n)\ndef xdist_args(request):\n    if request.param is None:\n        return [\"-p\", \"no:xdist\"]\n    return [\"-n\", \"auto\"] if request.param else []\n\n\n@pytest.mark.parametrize(\"pyfile_count\", [1, 2])\ndef test_mypy_success(testdir, pyfile_count, xdist_args):\n    \"\"\"Verify that running on a module with no type errors passes.\"\"\"\n    testdir.makepyfile(\n        **{\n            \"pyfile_{0}\".format(\n                pyfile_i,\n            ): \"\"\"\n                def pyfunc(x: int) -> int:\n                    return x * 2\n            \"\"\"\n            for pyfile_i in range(pyfile_count)\n        },\n    )\n    result = testdir.runpytest_subprocess(*xdist_args)\n    result.assert_outcomes()\n    assert result.ret == pytest.ExitCode.NO_TESTS_COLLECTED\n    result = testdir.runpytest_subprocess(\"--mypy\", *xdist_args)\n    mypy_file_checks = pyfile_count\n    mypy_status_check = 1\n    mypy_checks = mypy_file_checks + mypy_status_check\n    result.assert_outcomes(passed=mypy_checks)\n    assert result.ret == pytest.ExitCode.OK\n\n\n@pytest.mark.skipif(\n    PYTEST_VERSION < Version(\"7.4\"),\n    reason=\"https://github.com/pytest-dev/pytest/pull/10935\",\n)\n@pytest.mark.skipif(\n    PYTHON_VERSION < Version(\"3.10\"),\n    reason=\"PEP 597 was added in Python 3.10.\",\n)\n@pytest.mark.skipif(\n    PYTHON_VERSION >= Version(\"3.12\") and MYPY_VERSION < Version(\"1.5\"),\n    reason=\"https://github.com/python/mypy/pull/15558\",\n)\ndef test_mypy_encoding_warnings(testdir, monkeypatch):\n    \"\"\"Ensure no warnings are detected by PYTHONWARNDEFAULTENCODING.\"\"\"\n    testdir.makepyfile(\"\")\n    monkeypatch.setenv(\"PYTHONWARNDEFAULTENCODING\", \"1\")\n    result = testdir.runpytest_subprocess(\"--mypy\")\n    mypy_file_checks = 1\n    mypy_status_check = 1\n    mypy_checks = mypy_file_checks + mypy_status_check\n    expected_warnings = 2  # https://github.com/python/mypy/issues/14603\n    result.assert_outcomes(passed=mypy_checks, warnings=expected_warnings)\n\n\ndef test_mypy_pyi(testdir, xdist_args):\n    \"\"\"\n    Verify that a .py file will be skipped if\n    a .pyi file exists with the same filename.\n    \"\"\"\n    # The incorrect signature below should be ignored\n    # as the .pyi file takes priority\n    testdir.makepyfile(\n        pyfile=\"\"\"\n            def pyfunc(x: int) -> str:\n                return x * 2\n        \"\"\",\n    )\n\n    testdir.makefile(\n        \".pyi\",\n        pyfile=\"\"\"\n            def pyfunc(x: int) -> int: ...\n        \"\"\",\n    )\n\n    result = testdir.runpytest_subprocess(*xdist_args)\n    result.assert_outcomes()\n    result = testdir.runpytest_subprocess(\"--mypy\", *xdist_args)\n    mypy_file_checks = 1\n    mypy_status_check = 1\n    mypy_checks = mypy_file_checks + mypy_status_check\n    result.assert_outcomes(passed=mypy_checks)\n    assert result.ret == pytest.ExitCode.OK\n\n\ndef test_mypy_error(testdir, xdist_args):\n    \"\"\"Verify that running on a module with type errors fails.\"\"\"\n    testdir.makepyfile(\n        \"\"\"\n            def pyfunc(x: int) -> str:\n                return x * 2\n        \"\"\",\n    )\n    result = testdir.runpytest_subprocess(*xdist_args)\n    result.assert_outcomes()\n    assert \"_mypy_results_path\" not in result.stderr.str()\n    assert result.ret == pytest.ExitCode.NO_TESTS_COLLECTED\n    result = testdir.runpytest_subprocess(\"--mypy\", *xdist_args)\n    mypy_file_checks = 1\n    mypy_status_check = 1\n    mypy_checks = mypy_file_checks + mypy_status_check\n    result.assert_outcomes(failed=mypy_checks)\n    result.stdout.fnmatch_lines([\"2: error: Incompatible return value*\"])\n    assert \"_mypy_results_path\" not in result.stderr.str()\n    assert result.ret == pytest.ExitCode.TESTS_FAILED\n\n\ndef test_mypy_path_error(testdir, xdist_args):\n    \"\"\"Verify that runs are not affected by path errors.\"\"\"\n    testdir.makepyfile(\n        conftest=\"\"\"\n            def pytest_configure(config):\n                plugin = config.pluginmanager.getplugin('mypy')\n\n                class FakePath:\n                    def __init__(self, _):\n                        pass\n                    def resolve(self):\n                        raise OSError\n\n                Path = plugin.Path\n                plugin.Path = FakePath\n                plugin.MypyResults.from_mypy([], opts=['--version'])\n                plugin.Path = Path\n        \"\"\",\n    )\n    result = testdir.runpytest_subprocess(\"--mypy\", *xdist_args)\n    mypy_file_checks = 1\n    mypy_status_check = 1\n    mypy_checks = mypy_file_checks + mypy_status_check\n    result.assert_outcomes(passed=mypy_checks)\n    assert result.ret == pytest.ExitCode.OK\n\n\ndef test_mypy_annotation_unchecked(testdir, xdist_args, tmp_path):\n    \"\"\"Verify that annotation-unchecked warnings do not manifest as an error.\"\"\"\n    testdir.makepyfile(\n        \"\"\"\n            def pyfunc(x):\n                y: int = 2\n                return x * y\n        \"\"\",\n    )\n    result = testdir.runpytest_subprocess(*xdist_args)\n    result.assert_outcomes()\n    result = testdir.runpytest_subprocess(\"--mypy\", *xdist_args)\n    mypy_file_checks = 1\n    mypy_status_check = 1\n    mypy_checks = mypy_file_checks + mypy_status_check\n    outcomes = {\"passed\": mypy_checks}\n    result.assert_outcomes(**outcomes)\n    result.stdout.fnmatch_lines(\n        [\"*:2: note: By default the bodies of untyped functions are not checked*\"]\n    )\n    assert result.ret == pytest.ExitCode.OK\n\n\ndef test_mypy_ignore_missing_imports(testdir, xdist_args):\n    \"\"\"\n    Verify that --mypy-ignore-missing-imports\n    causes mypy to ignore missing imports.\n    \"\"\"\n    module_name = \"is_always_missing\"\n    testdir.makepyfile(\n        \"\"\"\n            try:\n                import {module_name}\n            except ImportError:\n                pass\n        \"\"\".format(\n            module_name=module_name,\n        ),\n    )\n    result = testdir.runpytest_subprocess(\"--mypy\", *xdist_args)\n    mypy_file_checks = 1\n    mypy_status_check = 1\n    mypy_checks = mypy_file_checks + mypy_status_check\n    result.assert_outcomes(failed=mypy_checks)\n    result.stdout.fnmatch_lines(\n        [\n            \"2: error: Cannot find *module named *{module_name}*\".format(\n                module_name=module_name,\n            ),\n        ],\n    )\n    assert result.ret == pytest.ExitCode.TESTS_FAILED\n    result = testdir.runpytest_subprocess(\"--mypy-ignore-missing-imports\", *xdist_args)\n    result.assert_outcomes(passed=mypy_checks)\n    assert result.ret == pytest.ExitCode.OK\n\n\ndef test_mypy_config_file(testdir, xdist_args):\n    \"\"\"Verify that --mypy-config-file works.\"\"\"\n    testdir.makepyfile(\n        \"\"\"\n            def pyfunc(x):\n                return x * 2\n        \"\"\",\n    )\n    result = testdir.runpytest_subprocess(\"--mypy\", *xdist_args)\n    mypy_file_checks = 1\n    mypy_status_check = 1\n    mypy_checks = mypy_file_checks + mypy_status_check\n    result.assert_outcomes(passed=mypy_checks)\n    assert result.ret == pytest.ExitCode.OK\n    mypy_config_file = testdir.makeini(\n        \"\"\"\n            [mypy]\n            disallow_untyped_defs = True\n        \"\"\",\n    )\n    result = testdir.runpytest_subprocess(\n        \"--mypy-config-file\",\n        mypy_config_file,\n        *xdist_args,\n    )\n    result.assert_outcomes(failed=mypy_checks)\n\n\ndef test_mypy_marker(testdir, xdist_args):\n    \"\"\"Verify that -m mypy only runs the mypy tests.\"\"\"\n    testdir.makepyfile(\n        \"\"\"\n            def test_fails():\n                assert False\n        \"\"\",\n    )\n    result = testdir.runpytest_subprocess(\"--mypy\", *xdist_args)\n    test_count = 1\n    mypy_file_checks = 1\n    mypy_status_check = 1\n    mypy_checks = mypy_file_checks + mypy_status_check\n    result.assert_outcomes(failed=test_count, passed=mypy_checks)\n    assert result.ret == pytest.ExitCode.TESTS_FAILED\n    result = testdir.runpytest_subprocess(\"--mypy\", \"-m\", \"mypy\", *xdist_args)\n    result.assert_outcomes(passed=mypy_checks)\n    assert result.ret == pytest.ExitCode.OK\n\n\ndef test_non_mypy_error(testdir, xdist_args):\n    \"\"\"Verify that non-MypyError exceptions are passed through the plugin.\"\"\"\n    message = \"This is not a MypyError.\"\n    testdir.makepyfile(\n        conftest=\"\"\"\n            def pytest_configure(config):\n                plugin = config.pluginmanager.getplugin('mypy')\n\n                class PatchedMypyFileItem(plugin.MypyFileItem):\n                    def runtest(self):\n                        raise Exception('{message}')\n\n                plugin.MypyFileItem = PatchedMypyFileItem\n        \"\"\".format(\n            message=message,\n        ),\n    )\n    result = testdir.runpytest_subprocess(*xdist_args)\n    result.assert_outcomes()\n    assert result.ret == pytest.ExitCode.NO_TESTS_COLLECTED\n    result = testdir.runpytest_subprocess(\"--mypy\", *xdist_args)\n    mypy_file_checks = 1  # conftest.py\n    mypy_status_check = 1\n    result.assert_outcomes(\n        failed=mypy_file_checks,  # patched to raise an Exception\n        passed=mypy_status_check,  # conftest.py has no type errors.\n    )\n    result.stdout.fnmatch_lines([\"*\" + message])\n    assert result.ret == pytest.ExitCode.TESTS_FAILED\n\n\ndef test_mypy_stderr(testdir, xdist_args):\n    \"\"\"Verify that stderr from mypy is printed.\"\"\"\n    stderr = \"This is stderr from mypy.\"\n    testdir.makepyfile(\n        conftest=\"\"\"\n            import mypy.api\n\n            def _patched_run(*args, **kwargs):\n                return '', '{stderr}', 1\n\n            mypy.api.run = _patched_run\n        \"\"\".format(\n            stderr=stderr,\n        ),\n    )\n    result = testdir.runpytest_subprocess(\"--mypy\", *xdist_args)\n    result.stdout.fnmatch_lines([stderr])\n\n\ndef test_mypy_unmatched_stdout(testdir, xdist_args):\n    \"\"\"Verify that unexpected output on stdout from mypy is printed.\"\"\"\n    stdout = \"This is unexpected output on stdout from mypy.\"\n    testdir.makepyfile(\n        conftest=\"\"\"\n            import mypy.api\n\n            def _patched_run(*args, **kwargs):\n                return '{stdout}', '', 1\n\n            mypy.api.run = _patched_run\n        \"\"\".format(\n            stdout=stdout,\n        ),\n    )\n    result = testdir.runpytest_subprocess(\"--mypy\", *xdist_args)\n    result.stdout.fnmatch_lines([stdout])\n\n\ndef test_api_mypy_argv(testdir, xdist_args):\n    \"\"\"Ensure that the plugin can be configured in a conftest.py.\"\"\"\n    testdir.makepyfile(\n        conftest=\"\"\"\n            def pytest_configure(config):\n                plugin = config.pluginmanager.getplugin('mypy')\n                plugin.mypy_argv.append('--version')\n        \"\"\",\n    )\n    result = testdir.runpytest_subprocess(\"--mypy\", *xdist_args)\n    assert result.ret == pytest.ExitCode.OK\n\n\ndef test_api_nodeid_name(testdir, xdist_args):\n    \"\"\"Ensure that the plugin can be configured in a conftest.py.\"\"\"\n    nodeid_name = \"UnmistakableNodeIDName\"\n    testdir.makepyfile(\n        conftest=\"\"\"\n            def pytest_configure(config):\n           
     plugin = config.pluginmanager.getplugin('mypy')\n                plugin.nodeid_name = '{}'\n        \"\"\".format(\n            nodeid_name,\n        ),\n    )\n    result = testdir.runpytest_subprocess(\"--mypy\", \"--verbose\", *xdist_args)\n    result.stdout.fnmatch_lines([\"*conftest.py::\" + nodeid_name + \"*\"])\n    assert result.ret == pytest.ExitCode.OK\n\n\ndef test_api_test_name_formatter(testdir, xdist_args):\n    \"\"\"Ensure that the test_name_formatter can be replaced in a conftest.py.\"\"\"\n    test_name = \"UnmistakableTestName\"\n    testdir.makepyfile(\n        conftest=f\"\"\"\n            cause_a_mypy_error: str = 5\n\n            def custom_test_name_formatter(item):\n                return \"{test_name}\"\n\n            def pytest_configure(config):\n                plugin = config.pluginmanager.getplugin('mypy')\n                plugin.test_name_formatter = custom_test_name_formatter\n        \"\"\",\n    )\n    result = testdir.runpytest_subprocess(\"--mypy\", *xdist_args)\n    result.stdout.fnmatch_lines([f\"*{test_name}*\"])\n    mypy_file_check = 1\n    mypy_status_check = 1\n    result.assert_outcomes(failed=mypy_file_check + mypy_status_check)\n    assert result.ret == pytest.ExitCode.TESTS_FAILED\n\n\n@pytest.mark.xfail(\n    Version(\"0.971\") <= MYPY_VERSION,\n    raises=AssertionError,\n    reason=\"https://github.com/python/mypy/issues/13701\",\n)\n@pytest.mark.parametrize(\n    \"module_name\",\n    [\n        \"__init__\",\n        \"good\",\n    ],\n)\ndef test_mypy_indirect(testdir, xdist_args, module_name):\n    \"\"\"Verify that uncollected files checked by mypy cause a failure.\"\"\"\n    testdir.makepyfile(\n        bad=\"\"\"\n            def pyfunc(x: int) -> str:\n                return x * 2\n        \"\"\",\n    )\n    pyfile = testdir.makepyfile(\n        **{\n            module_name: \"\"\"\n                import bad\n            \"\"\",\n        },\n    )\n    result = 
testdir.runpytest_subprocess(\"--mypy\", *xdist_args, str(pyfile))\n    mypy_file_checks = 1\n    mypy_status_check = 1\n    result.assert_outcomes(passed=mypy_file_checks, failed=mypy_status_check)\n    assert result.ret == pytest.ExitCode.TESTS_FAILED\n\n\ndef test_api_file_error_formatter(testdir, xdist_args):\n    \"\"\"Ensure that the file_error_formatter can be replaced in a conftest.py.\"\"\"\n    testdir.makepyfile(\n        bad=\"\"\"\n            def pyfunc(x: int) -> str:\n                return x * 2\n        \"\"\",\n    )\n    file_error = \"UnmistakableFileError\"\n    testdir.makepyfile(\n        conftest=f\"\"\"\n            def custom_file_error_formatter(item, results, lines):\n                return '{file_error}'\n\n            def pytest_configure(config):\n                plugin = config.pluginmanager.getplugin('mypy')\n                plugin.file_error_formatter = custom_file_error_formatter\n        \"\"\",\n    )\n    result = testdir.runpytest_subprocess(\"--mypy\", *xdist_args)\n    result.stdout.fnmatch_lines([f\"*{file_error}*\"])\n    assert result.ret == pytest.ExitCode.TESTS_FAILED\n\n\ndef test_pyproject_toml(testdir, xdist_args):\n    \"\"\"Ensure that the plugin allows configuration with pyproject.toml.\"\"\"\n    testdir.makefile(\n        \".toml\",\n        pyproject=\"\"\"\n            [tool.mypy]\n            disallow_untyped_defs = true\n        \"\"\",\n    )\n    testdir.makepyfile(\n        conftest=\"\"\"\n            def pyfunc(x):\n                return x * 2\n        \"\"\",\n    )\n    result = testdir.runpytest_subprocess(\"--mypy\", *xdist_args)\n    result.stdout.fnmatch_lines([\"1: error: Function is missing a type annotation*\"])\n    assert result.ret == pytest.ExitCode.TESTS_FAILED\n\n\ndef test_setup_cfg(testdir, xdist_args):\n    \"\"\"Ensure that the plugin allows configuration with setup.cfg.\"\"\"\n    testdir.makefile(\n        \".cfg\",\n        setup=\"\"\"\n            [mypy]\n            
disallow_untyped_defs = True\n        \"\"\",\n    )\n    testdir.makepyfile(\n        conftest=\"\"\"\n            def pyfunc(x):\n                return x * 2\n        \"\"\",\n    )\n    result = testdir.runpytest_subprocess(\"--mypy\", *xdist_args)\n    result.stdout.fnmatch_lines([\"1: error: Function is missing a type annotation*\"])\n    assert result.ret == pytest.ExitCode.TESTS_FAILED\n\n\n@pytest.mark.parametrize(\"module_name\", [\"__init__\", \"test_demo\"])\ndef test_looponfail(testdir, module_name):\n    \"\"\"Ensure that the plugin works with --looponfail.\"\"\"\n\n    pass_source = textwrap.dedent(\n        \"\"\"\\\n        def pyfunc(x: int) -> int:\n            return x * 2\n        \"\"\",\n    )\n    fail_source = textwrap.dedent(\n        \"\"\"\\\n        def pyfunc(x: int) -> str:\n            return x * 2\n        \"\"\",\n    )\n    pyfile = testdir.makepyfile(**{module_name: fail_source})\n    looponfailroot = testdir.mkdir(\"looponfailroot\")\n    looponfailroot_pyfile = looponfailroot.join(pyfile.basename)\n    pyfile.move(looponfailroot_pyfile)\n    pyfile = looponfailroot_pyfile\n    testdir.makeini(\n        textwrap.dedent(\n            \"\"\"\\\n            [pytest]\n            looponfailroots = {looponfailroots}\n            \"\"\".format(\n                looponfailroots=looponfailroot,\n            ),\n        ),\n    )\n\n    child = testdir.spawn_pytest(\n        \"--mypy --looponfail \" + str(pyfile),\n        expect_timeout=60.0,\n    )\n\n    def _expect_session():\n        child.expect(\"==== test session starts ====\")\n\n    def _expect_failure():\n        _expect_session()\n        child.expect(\"==== FAILURES ====\")\n        child.expect(pyfile.basename + \" ____\")\n        child.expect(\"2: error: Incompatible return value\")\n        child.expect(\"==== mypy ====\")\n        child.expect(\"Found 1 error in 1 file (checked 1 source file)\")\n        child.expect(\"2 failed\")\n        child.expect(\"#### 
LOOPONFAILING ####\")\n        _expect_waiting()\n\n    def _expect_waiting():\n        child.expect(\"#### waiting for changes ####\")\n        child.expect(\"Watching\")\n\n    def _fix():\n        pyfile.write(pass_source)\n        _expect_changed()\n        _expect_success()\n\n    def _expect_changed():\n        child.expect(\"MODIFIED \" + str(pyfile))\n\n    def _expect_success():\n        for _ in range(2):\n            _expect_session()\n            child.expect(\"==== mypy ====\")\n            child.expect(\"Success: no issues found in 1 source file\")\n            child.expect(\"2 passed\")\n        _expect_waiting()\n\n    def _break():\n        pyfile.write(fail_source)\n        _expect_changed()\n        _expect_failure()\n\n    _expect_failure()\n    _fix()\n    _break()\n    _fix()\n    child.kill(signal.SIGTERM)\n\n\ndef test_mypy_results_from_mypy_with_opts():\n    \"\"\"MypyResults.from_mypy respects passed options.\"\"\"\n    mypy_results = pytest_mypy.MypyResults.from_mypy([], opts=[\"--version\"])\n    assert mypy_results.status == 0\n    assert str(MYPY_VERSION) in mypy_results.stdout\n\n\ndef test_mypy_no_output(testdir, xdist_args):\n    \"\"\"No terminal summary is shown if there is no output from mypy.\"\"\"\n    testdir.makepyfile(\n        # Mypy prints a success message to stderr by default:\n        # \"Success: no issues found in 1 source file\"\n        # Clear stderr and unmatched_stdout to simulate mypy having no output:\n        conftest=\"\"\"\n            import pytest\n\n            @pytest.hookimpl(trylast=True)\n            def pytest_configure(config):\n                pytest_mypy = config.pluginmanager.getplugin(\"mypy\")\n                mypy_config_stash = config.stash[pytest_mypy.stash_key[\"config\"]]\n                with open(mypy_config_stash.mypy_results_path, mode=\"wb\") as results_f:\n                    pytest_mypy.MypyResults(\n                        opts=[],\n                        args=[],\n                
        stdout=\"\",\n                        stderr=\"\",\n                        status=0,\n                        path_lines={},\n                    ).dump(results_f)\n        \"\"\",\n    )\n    result = testdir.runpytest_subprocess(\"--mypy\", *xdist_args)\n    mypy_file_checks = 1\n    mypy_status_check = 1\n    mypy_checks = mypy_file_checks + mypy_status_check\n    result.assert_outcomes(passed=mypy_checks)\n    assert result.ret == pytest.ExitCode.OK\n    assert f\"= {pytest_mypy.terminal_summary_title} =\" not in str(result.stdout)\n\n\ndef test_py_typed(testdir):\n    \"\"\"Mypy recognizes that pytest_mypy is typed.\"\"\"\n    name = \"typed\"\n    testdir.makepyfile(**{name: \"import pytest_mypy\"})\n    result = testdir.run(\"mypy\", f\"{name}.py\")\n    assert result.ret == 0\n\n\ndef test_mypy_no_status_check(testdir, xdist_args):\n    \"\"\"Verify that --mypy-no-status-check disables MypyStatusItem collection.\"\"\"\n    testdir.makepyfile(\"one: int = 1\")\n    result = testdir.runpytest_subprocess(\"--mypy\", *xdist_args)\n    mypy_file_checks = 1\n    mypy_status_check = 1\n    result.assert_outcomes(passed=mypy_file_checks + mypy_status_check)\n    assert result.ret == pytest.ExitCode.OK\n    result = testdir.runpytest_subprocess(\"--mypy-no-status-check\", *xdist_args)\n    result.assert_outcomes(passed=mypy_file_checks)\n    assert result.ret == pytest.ExitCode.OK\n\n\ndef test_mypy_xfail_passes(testdir, xdist_args):\n    \"\"\"Verify that --mypy-xfail passes passes.\"\"\"\n    testdir.makepyfile(\"one: int = 1\")\n    result = testdir.runpytest_subprocess(\"--mypy\", *xdist_args)\n    mypy_file_checks = 1\n    mypy_status_check = 1\n    result.assert_outcomes(passed=mypy_file_checks + mypy_status_check)\n    assert result.ret == pytest.ExitCode.OK\n    result = testdir.runpytest_subprocess(\"--mypy-xfail\", *xdist_args)\n    result.assert_outcomes(passed=mypy_file_checks + mypy_status_check)\n    assert result.ret == 
pytest.ExitCode.OK\n\n\ndef test_mypy_xfail_xfails(testdir, xdist_args):\n    \"\"\"Verify that --mypy-xfail xfails failures.\"\"\"\n    testdir.makepyfile(\"one: str = 1\")\n    result = testdir.runpytest_subprocess(\"--mypy\", *xdist_args)\n    mypy_file_checks = 1\n    mypy_status_check = 1\n    result.assert_outcomes(failed=mypy_file_checks + mypy_status_check)\n    assert result.ret == pytest.ExitCode.TESTS_FAILED\n    result = testdir.runpytest_subprocess(\"--mypy-xfail\", *xdist_args)\n    result.assert_outcomes(xfailed=mypy_file_checks + mypy_status_check)\n    assert result.ret == pytest.ExitCode.OK\n\n\ndef test_mypy_xfail_reports_stdout(testdir, xdist_args):\n    \"\"\"Verify that --mypy-xfail reports stdout from mypy.\"\"\"\n    stdout = \"a distinct string on stdout\"\n    testdir.makepyfile(\n        conftest=f\"\"\"\n            import pytest\n\n            @pytest.hookimpl(trylast=True)\n            def pytest_configure(config):\n                pytest_mypy = config.pluginmanager.getplugin(\"mypy\")\n                mypy_config_stash = config.stash[pytest_mypy.stash_key[\"config\"]]\n                with open(mypy_config_stash.mypy_results_path, mode=\"wb\") as results_f:\n                    pytest_mypy.MypyResults(\n                        opts=[],\n                        args=[],\n                        stdout=\"{stdout}\",\n                        stderr=\"\",\n                        status=0,\n                        path_lines={{}},\n                    ).dump(results_f)\n        \"\"\",\n    )\n    result = testdir.runpytest_subprocess(\"--mypy\", *xdist_args)\n    assert result.ret == pytest.ExitCode.OK\n    assert stdout not in result.stdout.str()\n    result = testdir.runpytest_subprocess(\"--mypy-xfail\", *xdist_args)\n    assert result.ret == pytest.ExitCode.OK\n    assert stdout in result.stdout.str()\n\n\ndef test_error_severity():\n    \"\"\"Verify that non-error lines produce no severity.\"\"\"\n    assert 
pytest_mypy._error_severity(\"arbitrary line with no error\") is None\n\n\ndef test_mypy_report_style(testdir, xdist_args):\n    \"\"\"Verify that --mypy-report-style functions correctly.\"\"\"\n    module_name = \"unmistakable_module_name\"\n    testdir.makepyfile(\n        **{\n            module_name: \"\"\"\n            def pyfunc(x: int) -> str:\n                return x * 2\n        \"\"\"\n        },\n    )\n    result = testdir.runpytest_subprocess(\"--mypy-report-style\", \"no-path\", *xdist_args)\n    mypy_file_checks = 1\n    mypy_status_check = 1\n    mypy_checks = mypy_file_checks + mypy_status_check\n    result.assert_outcomes(failed=mypy_checks)\n    result.stdout.fnmatch_lines([\"2: error: Incompatible return value*\"])\n    assert result.ret == pytest.ExitCode.TESTS_FAILED\n    result = testdir.runpytest_subprocess(\"--mypy-report-style\", \"mypy\", *xdist_args)\n    result.assert_outcomes(failed=mypy_checks)\n    result.stdout.fnmatch_lines(\n        [f\"{module_name}.py:2: error: Incompatible return value*\"]\n    )\n    assert result.ret == pytest.ExitCode.TESTS_FAILED\n"
  },
  {
    "path": "tox.ini",
    "content": "# For more information about tox, see https://tox.readthedocs.io/en/latest/\n[tox]\nminversion = 4.4\nisolated_build = true\nenvlist =\n    py38-pytest{7.0, 7.x, 8.0, 8.x}-mypy{1.0, 1.x}-xdist{1.x, 2.0, 2.x, 3.0, 3.x}\n    py39-pytest{7.0, 7.x, 8.0, 8.x}-mypy{1.0, 1.x}-xdist{1.x, 2.0, 2.x, 3.0, 3.x}\n    py310-pytest{7.0, 7.x, 8.0, 8.x}-mypy{1.0, 1.x}-xdist{1.x, 2.0, 2.x, 3.0, 3.x}\n    py311-pytest{7.0, 7.x, 8.0, 8.x}-mypy{1.0, 1.x}-xdist{1.x, 2.0, 2.x, 3.0, 3.x}\n    py312-pytest{7.0, 7.x, 8.0, 8.x}-mypy{1.0, 1.x}-xdist{1.x, 2.0, 2.x, 3.0, 3.x}\n    py313-pytest{7.0, 7.x, 8.0, 8.x}-mypy{1.0, 1.x}-xdist{1.x, 2.0, 2.x, 3.0, 3.x}\n    static\n    publish\n\n[gh-actions]\npython =\n    3.8: py38-pytest{7.0, 7.x, 8.0, 8.x}-mypy{1.0, 1.x}-xdist{1.x, 2.0, 2.x, 3.0, 3.x}\n    3.9: py39-pytest{7.0, 7.x, 8.0, 8.x}-mypy{1.0, 1.x}-xdist{1.x, 2.0, 2.x, 3.0, 3.x}\n    3.10: py310-pytest{7.0, 7.x, 8.0, 8.x}-mypy{1.0, 1.x}-xdist{1.x, 2.0, 2.x, 3.0, 3.x}\n    3.11: py311-pytest{7.0, 7.x, 8.0, 8.x}-mypy{1.0, 1.x}-xdist{1.x, 2.0, 2.x, 3.0, 3.x}\n    3.12: py312-pytest{7.0, 7.x, 8.0, 8.x}-mypy{1.0, 1.x}-xdist{1.x, 2.0, 2.x, 3.0, 3.x}, static, publish\n    3.13: py313-pytest{7.0, 7.x, 8.0, 8.x}-mypy{1.0, 1.x}-xdist{1.x, 2.0, 2.x, 3.0, 3.x}\n\n[testenv]\nconstrain_package_deps = true\ndeps =\n    pytest7.0: pytest ~= 7.0.0\n    pytest7.x: pytest ~= 7.0\n    pytest8.0: pytest ~= 8.0.0\n    pytest8.x: pytest ~= 8.0\n    mypy1.0: mypy ~= 1.0.0\n    mypy1.x: mypy ~= 1.0\n    xdist1.x: pytest-xdist ~= 1.0\n    xdist2.0: pytest-xdist ~= 2.0.0\n    xdist2.x: pytest-xdist ~= 2.0\n    xdist3.0: pytest-xdist ~= 3.0.0\n    xdist3.x: pytest-xdist ~= 3.0\n\n    packaging ~= 21.3\n    pytest-cov ~= 4.1.0\n    pytest-randomly ~= 3.4\nsetenv =\n    COVERAGE_FILE = .coverage.{envname}\ncommands = pytest -p no:mypy {posargs:--cov pytest_mypy --cov-branch --cov-fail-under 100 --cov-report term-missing -n auto}\n\n[pytest]\ntestpaths = tests\n\n[testenv:publish]\npassenv = 
TWINE_*\nconstrain_package_deps = false\ndeps =\n    build[virtualenv] ~= 1.0.0\n    twine ~= 5.0.0\ncommands =\n    {envpython} -m build --outdir {envtmpdir} .\n    twine {posargs:check} {envtmpdir}/*\n\n[testenv:static]\nbasepython = py312  # pytest.Node.from_parent uses typing.Self\ndeps =\n    bandit ~= 1.7.0\n    black ~= 24.2.0\n    flake8 ~= 7.0.0\n    mypy ~= 1.11.0\n    pytest-xdist >= 3.6.0  # needed for type-checking\ncommands =\n    black --check src tests\n    flake8 src tests\n    mypy --strict src\n    bandit --recursive src\n\n[flake8]\nmax-line-length = 88\nextend-ignore = E203\n"
  }
]