[
  {
    "path": ".github/ISSUE_TEMPLATE/00-bug-issue.md",
    "content": "---\nname: Bug Issue\nabout: Use this template for reporting a bug\nlabels: 'type:bug'\n\n---\n**System information**\n- Have I written custom code (as opposed to using stock example code provided):\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):\n- Fairness Indicators version:\n- TensorFlow version:\n- Python version:\n\n\n**Describe the current behavior**\n\n**Describe the expected behavior**\n\n**Standalone code to reproduce the issue**\nProvide a reproducible test case that is the bare minimum necessary to generate\nthe problem. If possible, please share a link to Colab/Jupyter/any notebook.\n\n**Other info / logs** Include any logs or source code that would be helpful to\ndiagnose the problem. If including tracebacks, please include the full\ntraceback. Large logs and files should be attached.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/10-build-installation-issue.md",
    "content": "---\nname: Build/Installation Issue\nabout: Use this template for build/installation issues\nlabels: 'type:build/install'\n\n---\n**System information**\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):\n- Fairness Indicators version:\n- Python version:\n- Pip version:\n\n\n\n**Describe the problem**\n\n**Provide the exact sequence of commands / steps that you executed before running into the problem**\n\n\n**Any other info / logs**\nInclude any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/20-documentation-issue.md",
    "content": "---\nname: Documentation Issue\nabout: Use this template for documentation-related issues\nlabels: 'type:docs'\n\n---\nThe Fairness Indicators docs are open source! To get involved, read the\ndocumentation contributor guide:\nhttps://github.com/tensorflow/fairness-indicators/blob/master/CONTRIBUTING.md\n\n## URL(s) with the issue:\n\nPlease provide a link to the documentation entry.\n\n## Description of issue (what needs changing):\n\n### Clear description\n\nFor example, why should someone use this method? How is it useful?\n\n### Correct links\n\nIs the link to the source code correct?\n\n### Parameters defined\n\nAre all parameters defined and formatted correctly?\n\n### Returns defined\n\nAre return values defined?\n\n### Raises listed and defined\n\nAre the errors defined?\n\n### Usage example\n\nIs there currently a usage example for this method?\n\n### Request visuals, if applicable\n\nAre there currently visuals? If not, would visuals clarify the content?\n\n### Submit a pull request?\n\nAre you planning to also submit a pull request to fix the issue?\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/30-feature-request.md",
    "content": "---\nname: Feature Request\nabout: Use this template for raising a feature request\nlabels: 'type:feature'\n\n---\n**Describe the feature and the current behavior/state.**\n\n**Will this change the current API? How?**\n\n**Who will benefit from this feature?**\n\n**Are you willing to contribute it? (Yes/No)**\n\n**Any other info.**\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/40-performance-issue.md",
    "content": "---\nname: Performance Issue\nabout: Use this template for reporting a performance issue\nlabels: 'type:performance'\n\n---\n**System information**\n- Have I written custom code (as opposed to using stock example code provided):\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):\n- Fairness Indicators version:\n- TensorFlow version:\n- Python version:\n\n**Describe the current behavior**\n\n**Describe the expected behavior**\n\n**Standalone code to reproduce the issue**\nProvide a reproducible test case that is the bare minimum necessary to generate\nthe problem. If possible, please share a link to Colab/Jupyter/any notebook.\n\n**Other info / logs** Include any logs or source code that would be helpful to\ndiagnose the problem. If including tracebacks, please include the full\ntraceback. Large logs and files should be attached.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/50-other-issues.md",
    "content": "---\nname: Other Issues\nabout: Use this template for any other non-support related issues\nlabels: 'type:others'\n\n---\n\nThis template is for miscellaneous issues not covered by the other categories.\n"
  },
  {
    "path": ".github/actions/setup-env/action.yml",
    "content": "name: Set up environment\ndescription: Set up environment and install package\n\ninputs:\n  python-version:\n    default: \"3.10\"\n    required: true\n  package-root-dir:\n    default: \"./\"\n    required: true\n\nruns:\n  using: composite\n\n  steps:\n    # Composite actions have no `matrix` context; use `inputs` here.\n    - name: Set up Python ${{ inputs.python-version }}\n      uses: actions/setup-python@v5\n      with:\n        python-version: ${{ inputs.python-version }}\n        # `cache-dependency-path` only takes effect when `cache` is enabled.\n        cache: pip\n        cache-dependency-path: |\n          ${{ inputs.package-root-dir }}/setup.py\n\n    - name: Install dependencies\n      shell: bash\n      run: |\n        python -m pip install --upgrade pip\n        # Quote so the [test] extra is not treated as a shell glob.\n        pip install \"${{ inputs.package-root-dir }}[test]\"\n"
  },
  {
    "path": ".github/workflows/build.yml",
    "content": "name: Build\n\non:\n  push:\n    branches:\n      - master\n  pull_request:\n    branches:\n      - master\n  workflow_dispatch:\n\njobs:\n\n  build:\n    runs-on: ubuntu-latest\n    strategy:\n      matrix:\n        python-version: [\"3.9\", \"3.10\"]\n\n    steps:\n    - name: Checkout\n      uses: actions/checkout@v4\n\n    - name: Set up Python\n      uses: actions/setup-python@v5\n      with:\n        python-version: ${{ matrix.python-version }}\n\n    - name: Install python build dependencies\n      run: |\n        python -m pip install --upgrade pip build\n\n    - name: Build wheels\n      run: |\n        python -m build --wheel --sdist\n        mkdir wheelhouse\n        mv dist/* wheelhouse/\n\n    - name: List and check wheels\n      run: |\n        # Quote the requirement so `>=` is not parsed as a shell redirection.\n        pip install twine 'pkginfo>=1.11.0'\n        ls -lh wheelhouse/\n        twine check wheelhouse/*\n\n    - name: Upload wheels\n      uses: actions/upload-artifact@v4\n      with:\n        name: wheels-${{ matrix.python-version }}\n        path: ./wheelhouse/*\n\n  upload_to_pypi:\n    name: Upload to PyPI\n    runs-on: ubuntu-latest\n    if: (github.event_name == 'release' && startsWith(github.ref, 'refs/tags')) || (github.event_name == 'workflow_dispatch')\n    needs: [build]\n    environment:\n      name: pypi\n      url: https://pypi.org/p/fairness-indicators\n    permissions:\n      id-token: write\n    steps:\n      - name: Retrieve wheels\n        uses: actions/download-artifact@v4.1.8\n        with:\n          merge-multiple: true\n          path: wheels\n\n      - name: List the build artifacts\n        run: |\n          ls -lAs wheels/\n\n      - name: Upload to PyPI\n        uses: pypa/gh-action-pypi-publish@release/v1.12\n        with:\n          packages_dir: wheels/\n          repository_url: https://upload.pypi.org/legacy/\n          verify_metadata: false\n          verbose: true\n"
  },
  {
    "path": ".github/workflows/ci-lint.yml",
    "content": "name: pre-commit\n\non:\n  pull_request:\n  push:\n     branches: [master]\n\njobs:\n  pre-commit:\n    runs-on: ubuntu-latest\n    steps:\n    - uses: actions/checkout@v4.1.7\n      with:\n        # Ensure the full history is fetched\n        # This is required to run pre-commit on a specific set of commits\n        # TODO: Remove this when all the pre-commit issues are fixed\n        fetch-depth: 0\n    - uses: actions/setup-python@v5.1.1\n      with:\n        python-version: 3.13\n    - uses: pre-commit/action@v3.0.1\n"
  },
  {
    "path": ".github/workflows/docs.yml",
    "content": "name: Deploy docs\non:\n  workflow_dispatch:\n  push:\n    branches:\n      - 'master'\n  pull_request:\npermissions:\n  contents: write\njobs:\n  deploy:\n    runs-on: ubuntu-latest\n    steps:\n      - name: Checkout repo\n        uses: actions/checkout@v4\n\n      - name: Configure Git Credentials\n        run: |\n          git config user.name github-actions[bot]\n          git config user.email 41898282+github-actions[bot]@users.noreply.github.com\n        if: (github.event_name != 'pull_request')\n\n      - name: Set up Python 3.9\n        uses: actions/setup-python@v5\n        with:\n          python-version: '3.9'\n          cache: 'pip'\n          cache-dependency-path: |\n            setup.py\n            requirements-docs.txt\n\n      - name: Save time for cache for mkdocs\n        run: echo \"cache_id=$(date --utc '+%V')\" >> $GITHUB_ENV\n\n      - name: Caching\n        uses: actions/cache@v4\n        with:\n          key: mkdocs-material-${{ env.cache_id }}\n          path: .cache\n          restore-keys: |\n            mkdocs-material-\n\n      - name: Install Dependencies\n        run: pip install -r requirements-docs.txt\n\n      - name: Deploy to GitHub Pages\n        run: mkdocs gh-deploy --force\n        if: (github.event_name != 'pull_request')\n\n      - name: Build docs to check for errors\n        run: mkdocs build\n        if: (github.event_name == 'pull_request')\n"
  },
  {
    "path": ".github/workflows/test.yml",
    "content": "name: Tests\non:\n  push:\n    paths-ignore:\n      - '**.md'\n      - 'docs/**'\n  pull_request:\n    branches: [ master ]\n    paths-ignore:\n      - '**.md'\n      - 'docs/**'\n  workflow_dispatch:\n\njobs:\n  tests:\n    if: github.actor != 'copybara-service[bot]'\n    runs-on: ubuntu-latest\n\n    strategy:\n      matrix:\n        python-version: ['3.9', '3.10']\n        package-root-dir: ['./', './tensorboard_plugin']\n\n    steps:\n    - name: Checkout repo\n      uses: actions/checkout@v4\n\n    - name: Set up environment\n      uses: ./.github/actions/setup-env\n      with:\n        python-version: ${{ matrix.python-version }}\n        package-root-dir: ${{ matrix.package-root-dir }}\n\n    - name: Run tests\n      shell: bash\n      run: |\n        cd ${{ matrix.package-root-dir }}\n        pytest\n"
  },
  {
    "path": ".pre-commit-config.yaml",
    "content": "# pre-commit is a tool to perform a predefined set of tasks manually and/or\n# automatically before git commits are made.\n#\n# Config reference: https://pre-commit.com/#pre-commit-configyaml---top-level\n#\n# Common tasks\n#\n# - Register git hooks: pre-commit install --install-hooks\n# - Run on all files:   pre-commit run --all-files\n#\n# These pre-commit hooks are run as CI.\n#\n# NOTE: if it can be avoided, add configs/args in pyproject.toml or below instead of creating a new `.config.file`.\n# https://pre-commit.ci/#configuration\nci:\n  autoupdate_schedule: monthly\n  autofix_commit_msg: |\n    [pre-commit.ci] Apply automatic pre-commit fixes\n\nrepos:\n  # general\n  - repo: https://github.com/pre-commit/pre-commit-hooks\n    rev: v4.6.0\n    hooks:\n      - id: end-of-file-fixer\n        exclude: '\\.svg$'\n      - id: trailing-whitespace\n        exclude: '\\.svg$'\n      - id: check-json\n      - id: check-yaml\n        args: [--allow-multiple-documents, --unsafe]\n      - id: check-toml\n\n  - repo: https://github.com/astral-sh/ruff-pre-commit\n    rev: v0.5.6\n    hooks:\n      - id: ruff\n        args: [\"--fix\"]\n      - id: ruff-format\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "# How to Contribute\n\nWe'd love to accept your patches and contributions to this project. There are\njust a few small guidelines you need to follow.\n\n## Contributor License Agreement\n\nContributions to this project must be accompanied by a Contributor License\nAgreement. You (or your employer) retain the copyright to your contribution,\nthis simply gives us permission to use and redistribute your contributions as\npart of the project. Head over to <https://cla.developers.google.com/> to see\nyour current agreements on file or to sign a new one.\n\nYou generally only need to submit a CLA once, so if you've already submitted one\n(even if it was for a different project), you probably don't need to do it\nagain.\n\n## Code reviews\n\nAll submissions, including submissions by project members, require review. We\nuse GitHub pull requests for this purpose. Consult\n[GitHub Help](https://help.github.com/articles/about-pull-requests/) for more\ninformation on using pull requests.\n"
  },
  {
    "path": "LICENSE",
    "content": "                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright 2017, The TensorFlow Authors.\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n\n--------------------------------------------------------------------------------\nMIT\nThe MIT License (MIT)\n\nCopyright (c) 2014-2015, Jon Schlinkert.\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\nTHE SOFTWARE.\n\n\n--------------------------------------------------------------------------------\nBSD-3-Clause\nCopyright (c) 2016, Daniel Wirtz  All rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are\nmet:\n\n* Redistributions of source code must retain the above copyright\n  notice, this list of conditions and the following disclaimer.\n* Redistributions in binary form must reproduce the above copyright\n  notice, this list of conditions and the following disclaimer in the\n  documentation and/or other materials provided with the distribution.\n* Neither the name of its author, nor the names of its contributors\n  may be used to endorse or promote products derived from this software\n  without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n\"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\nLIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\nA PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\nOWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\nSPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\nLIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\nDATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\nTHEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n"
  },
  {
    "path": "README.md",
    "content": "# Fairness Indicators\n\n![Fairness_Indicators](https://raw.githubusercontent.com/tensorflow/fairness-indicators/master/fairness_indicators/images/fairnessIndicators.png)\n\nFairness Indicators is designed to support teams in evaluating, improving, and comparing models for fairness concerns in partnership with the broader TensorFlow toolkit.\n\nThe tool is actively used internally by many of our products. We would love to partner with you to understand where Fairness Indicators is most useful and where added functionality would be valuable. Please reach out at tfx@tensorflow.org. You can provide feedback and feature requests [here](https://github.com/tensorflow/fairness-indicators/issues/new/choose).\n\n## Key links\n* [Introductory Video](https://www.youtube.com/watch?v=pHT-ImFXPQo)\n* [Fairness Indicators Case Study](https://developers.google.com/machine-learning/practica/fairness-indicators?utm_source=github&utm_medium=github&utm_campaign=fi-practicum&utm_term=&utm_content=repo-body)\n* [Fairness Indicators Example Colab](https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_Example_Colab.ipynb)\n* [Pandas DataFrame to Fairness Indicators Case Study](https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_Pandas_Case_Study.ipynb)\n* [Fairness Indicators: Thinking about Fairness Evaluation](https://github.com/tensorflow/fairness-indicators/blob/master/g3doc/guide/guidance.md)\n\n## What is Fairness Indicators?\nFairness Indicators enables easy computation of commonly identified fairness metrics for **binary** and **multiclass** classifiers.\n\nMany existing tools for evaluating fairness concerns don’t work well on large-scale datasets and models. At Google, it is important for us to have tools that can work on billion-user systems. 
Fairness Indicators allows you to evaluate fairness metrics across use cases of any size.\n\nIn particular, Fairness Indicators includes the ability to:\n\n* Evaluate the distribution of datasets\n* Evaluate model performance, sliced across defined groups of users\n  * Feel confident about your results with confidence intervals and evals at multiple thresholds\n* Dive deep into individual slices to explore root causes and opportunities for improvement\n\nThis [case study](https://developers.google.com/machine-learning/practica/fairness-indicators?utm_source=github&utm_medium=github&utm_campaign=fi-practicum&utm_term=&utm_content=repo-body), complete with [videos](https://www.youtube.com/watch?v=pHT-ImFXPQo) and programming exercises, demonstrates how Fairness Indicators can be used on one of your own products to evaluate fairness concerns over time.\n\n[![](http://img.youtube.com/vi/pHT-ImFXPQo/0.jpg)](http://www.youtube.com/watch?v=pHT-ImFXPQo \"\")\n\n## [Installation](https://pypi.org/project/fairness-indicators/)\n\n`pip install fairness-indicators`\n\nThe pip package includes:\n\n* [**TensorFlow Data Validation (TFDV)**](https://github.com/tensorflow/data-validation) - analyze the distribution of your dataset\n* [**TensorFlow Model Analysis (TFMA)**](https://github.com/tensorflow/model-analysis) - analyze model performance\n  * **Fairness Indicators** - an extension of TFMA that adds fairness metrics and easy performance comparison across slices\n* [**The What-If Tool (WIT)**](https://github.com/PAIR-code/what-if-tool) - an interactive visual interface designed to help you probe your models\n\n### Nightly Packages\n\nFairness Indicators also hosts nightly packages at\nhttps://pypi-nightly.tensorflow.org on Google Cloud. 
To install the latest\nnightly package, please use the following command:\n\n```bash\npip install --extra-index-url https://pypi-nightly.tensorflow.org/simple fairness-indicators\n```\n\nThis will install the nightly packages for the major dependencies of Fairness\nIndicators, such as TensorFlow Data Validation (TFDV) and TensorFlow Model Analysis\n(TFMA).\n\n## How can I use Fairness Indicators?\nTensorFlow Models\n\n* Access Fairness Indicators as part of the Evaluator component in TensorFlow Extended \\[[docs](https://www.tensorflow.org/tfx/guide/evaluator)]\n* Access Fairness Indicators in TensorBoard when evaluating other real-time metrics \\[[docs](https://github.com/tensorflow/tensorboard/blob/master/docs/fairness-indicators.md)]\n\nNot using existing TensorFlow tools? No worries!\n\n* Download the Fairness Indicators pip package, and use TensorFlow Model Analysis as a standalone tool \\[[docs](https://www.tensorflow.org/tfx/guide/fairness_indicators)]\n* Model Agnostic TFMA enables you to compute Fairness Indicators based on the output of any model \\[[docs](https://www.tensorflow.org/tfx/guide/fairness_indicators)]\n\n## Examples\n\nThe [examples](https://github.com/tensorflow/fairness-indicators/tree/master/g3doc/tutorials) directory contains several examples.\n\n* [Fairness_Indicators_Example_Colab.ipynb](https://github.com/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_Example_Colab.ipynb) gives an overview of Fairness Indicators in [TensorFlow Model Analysis](https://www.tensorflow.org/tfx/guide/tfma) and how to use it with a real dataset. 
This notebook also goes over [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started) and the [What-If Tool](https://pair-code.github.io/what-if-tool/), two tools for analyzing TensorFlow models that are packaged with Fairness Indicators.\n* [Fairness_Indicators_on_TF_Hub_Text_Embeddings.ipynb](https://github.com/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_on_TF_Hub_Text_Embeddings.ipynb) demonstrates how to use Fairness Indicators to compare models trained on different [text embeddings](https://en.wikipedia.org/wiki/Word_embedding). This notebook uses text embeddings from [TensorFlow Hub](https://www.tensorflow.org/hub), TensorFlow's library to publish, discover, and reuse model components.\n* [Fairness_Indicators_TensorBoard_Plugin_Example_Colab.ipynb](https://github.com/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_TensorBoard_Plugin_Example_Colab.ipynb)\ndemonstrates how to visualize Fairness Indicators in TensorBoard.\n\n## More questions?\n\nFor more information on how to think about fairness evaluation in the context of your use case, see [this guide](https://github.com/tensorflow/fairness-indicators/blob/master/g3doc/guide/guidance.md).\n\nIf you have found a bug in Fairness Indicators, please file a [GitHub issue](https://github.com/tensorflow/fairness-indicators/issues/new/choose) with as much supporting information as you can provide.\n\n## Compatible versions\n\nThe following table shows the package versions that are\ncompatible with each other. 
This is determined by our testing framework, but\nother *untested* combinations may also work.\n\n|fairness-indicators                                                                        | tensorflow         | tensorflow-data-validation | tensorflow-model-analysis |\n|-------------------------------------------------------------------------------------------|--------------------|----------------------------|---------------------------|\n|[GitHub master](https://github.com/tensorflow/fairness-indicators/blob/master/RELEASE.md)  | nightly (1.x/2.x)  | 1.17.0                     | 0.48.0                    |\n|[v0.48.0](https://github.com/tensorflow/fairness-indicators/blob/v0.48.0/RELEASE.md)       | 2.17               | 1.17.0                     | 0.48.0                    |\n|[v0.47.0](https://github.com/tensorflow/fairness-indicators/blob/v0.47.0/RELEASE.md)       | 2.16               | 1.16.1                     | 0.47.1                    |\n|[v0.46.0](https://github.com/tensorflow/fairness-indicators/blob/v0.46.0/RELEASE.md)       | 2.15               | 1.15.1                     | 0.46.0                    |\n|[v0.44.0](https://github.com/tensorflow/fairness-indicators/blob/v0.44.0/RELEASE.md)       | 2.12               | 1.13.0                     | 0.44.0                    |\n|[v0.43.0](https://github.com/tensorflow/fairness-indicators/blob/v0.43.0/RELEASE.md)       | 2.11               | 1.12.0                     | 0.43.0                    |\n|[v0.42.0](https://github.com/tensorflow/fairness-indicators/blob/v0.42.0/RELEASE.md)       | 1.15.5 / 2.10      | 1.11.0                     | 0.42.0                    |\n|[v0.41.0](https://github.com/tensorflow/fairness-indicators/blob/v0.41.0/RELEASE.md)       | 1.15.5 / 2.9       | 1.10.0                     | 0.41.0                    |\n|[v0.40.0](https://github.com/tensorflow/fairness-indicators/blob/v0.40.0/RELEASE.md)       | 1.15.5 / 2.9       | 1.9.0                      | 0.40.0                    
|\n|[v0.39.0](https://github.com/tensorflow/fairness-indicators/blob/v0.39.0/RELEASE.md)       | 1.15.5 / 2.8       | 1.8.0                      | 0.39.0                    |\n|[v0.38.0](https://github.com/tensorflow/fairness-indicators/blob/v0.38.0/RELEASE.md)       | 1.15.5 / 2.8       | 1.7.0                      | 0.38.0                    |\n|[v0.37.0](https://github.com/tensorflow/fairness-indicators/blob/v0.37.0/RELEASE.md)       | 1.15.5 / 2.7       | 1.6.0                      | 0.37.0                    |\n|[v0.36.0](https://github.com/tensorflow/fairness-indicators/blob/v0.36.0/RELEASE.md)       | 1.15.2 / 2.7       | 1.5.0                      | 0.36.0                    |\n|[v0.35.0](https://github.com/tensorflow/fairness-indicators/blob/v0.35.0/RELEASE.md)       | 1.15.2 / 2.6       | 1.4.0                      | 0.35.0                    |\n|[v0.34.0](https://github.com/tensorflow/fairness-indicators/blob/v0.34.0/RELEASE.md)       | 1.15.2 / 2.6       | 1.3.0                      | 0.34.0                    |\n|[v0.33.0](https://github.com/tensorflow/fairness-indicators/blob/v0.33.0/RELEASE.md)       | 1.15.2 / 2.5       | 1.2.0                      | 0.33.0                    |\n|[v0.30.0](https://github.com/tensorflow/fairness-indicators/blob/v0.30.0/RELEASE.md)       | 1.15.2 / 2.4       | 0.30.0                     | 0.30.0                    |\n|[v0.29.0](https://github.com/tensorflow/fairness-indicators/blob/v0.29.0/RELEASE.md)       | 1.15.2 / 2.4       | 0.29.0                     | 0.29.0                    |\n|[v0.28.0](https://github.com/tensorflow/fairness-indicators/blob/v0.28.0/RELEASE.md)       | 1.15.2 / 2.4       | 0.28.0                     | 0.28.0                    |\n|[v0.27.0](https://github.com/tensorflow/fairness-indicators/blob/v0.27.0/RELEASE.md)       | 1.15.2 / 2.4       | 0.27.0                     | 0.27.0                    |\n|[v0.26.0](https://github.com/tensorflow/fairness-indicators/blob/v0.26.0/RELEASE.md)       | 
1.15.2 / 2.3       | 0.26.0                     | 0.26.0                    |\n|[v0.25.0](https://github.com/tensorflow/fairness-indicators/blob/v0.25.0/RELEASE.md)       | 1.15.2 / 2.3       | 0.25.0                     | 0.25.0                    |\n|[v0.24.0](https://github.com/tensorflow/fairness-indicators/blob/v0.24.0/RELEASE.md)       | 1.15.2 / 2.3       | 0.24.0                     | 0.24.0                    |\n|[v0.23.0](https://github.com/tensorflow/fairness-indicators/blob/v0.23.0/RELEASE.md)       | 1.15.2 / 2.3       | 0.23.0                     | 0.23.0                    |\n"
  },
  {
    "path": "RELEASE.md",
    "content": "<!-- mdlint off(HEADERS_TOO_MANY_H1) -->\n\n# Current Version (Still in Development)\n\n## Major Features and Improvements\n\n## Bug Fixes and Other Changes\n\n## Breaking Changes\n\n## Deprecations\n\n# Version 0.48.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorflow>=2.17,<2.18`.\n*   Depends on `tensorflow-data-validation>=1.17.0,<1.18.0`.\n*   Depends on `tensorflow-model-analysis>=0.48,<0.49`.\n*   Depends on `protobuf>=4.21.6,<6.0.0`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.47.0\n\n## Major Features and Improvements\n\n * Add fairness indicator metrics in the third_party library.\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorflow>=2.16,<2.17`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.46.0\n\n## Major Features and Improvements\n\n*  Update example model to use Keras models instead of estimators.\n\n## Bug Fixes and Other Changes\n\n*   N/A\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*  Deprecated python 3.8 support\n\n# Version 0.44.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*  Depends on `tensorflow>=2.12.0,<2.13`.\n*  Depends on `tensorflow-data-validation>=1.13.0,<1.14.0`.\n*  Depends on `tensorflow-model-analysis>=0.44,<0.45`.\n*  Depends on `protobuf>=3.20.3,<5`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   Deprecating python3.7 support.\n\n# Version 0.43.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorflow>=2.11,<2.12`\n*   Depends on `tensorflow-data-validation>=1.11.0,<1.12.0`.\n*   Depends on `tensorflow-model-analysis>=0.42,<0.43`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.42.0\n\n## Major Features and Improvements\n\n*   This is the last version that supports TensorFlow 1.15.x. 
TF 1.15.x support\n    will be removed in the next version. Please check the\n    [TF2 migration guide](https://www.tensorflow.org/guide/migrate) to migrate\n    to TF2.\n\n## Bug Fixes and Other Changes\n\n*   N/A\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.41.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorflow-data-validation>=1.10.0,<1.11.0`.\n*   Depends on `tensorflow-model-analysis>=0.41,<0.42`.\n*   Depends on `tensorflow>=1.15.5,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,!=2.8.*,!=2.9.*,<3`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.40.0\n\n## Major Features and Improvements\n\n*   Allow counterfactual metrics to be calculated from predictions instead of\n    only features.\n*   Add precision and recall to the set of fairness indicators metrics.\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorflow-data-validation>=1.9.0,<1.10.0`.\n*   Depends on `tensorflow-model-analysis>=0.40,<0.41`.\n*   Depends on `tensorflow>=1.15.5,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,!=2.8.*,<3`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.39.0\n\n## Major Features and Improvements\n\n*   Allow counterfactual metrics to be calculated from predictions instead of\n    only features.\n*   Add precision and recall to the set of fairness indicators metrics.\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorflow-data-validation>=1.8.0,<1.9.0`.\n*   Depends on `tensorflow-model-analysis>=0.39,<0.40`.\n*   Depends on `tensorflow>=1.15.5,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.38.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorflow-data-validation>=1.7.0,<1.8.0`.\n*   Depends on 
`tensorflow-model-analysis>=0.38,<0.39`.\n*   Depends on `tensorflow>=1.15.5,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.37.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*  Fix Fairness Indicators UI bug with overlapping charts when comparing EvalResults\n*   Depends on `tensorflow-data-validation>=1.6.0,<1.7.0`.\n*   Depends on `tensorflow-model-analysis>=0.37,<0.38`.\n*   Depends on `tensorflow>=1.15.5,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,<3`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.36.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorflow-data-validation>=1.5.0,<1.6.0`.\n*   Depends on `tensorflow-model-analysis>=0.36,<0.37`.\n*   Depends on `tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,<3`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.35.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorflow-data-validation>=1.4.0,<1.5.0`.\n*   Depends on `tensorflow-model-analysis>=0.35,<0.36`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   Deprecating python 3.6 support.\n\n# Version 0.34.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,<3`.\n*   Depends on `tensorflow-data-validation>=1.3.0,<1.4.0`.\n*   Depends on `tensorflow-model-analysis>=0.34,<0.35`.\n\n## Breaking Changes\n\n*   Drop Py2 support.\n\n## Deprecations\n\n*   N/A\n\n# Version 0.33.0\n\n## Major Features and Improvements\n\n*   Porting Counterfactual Fairness metrics into FI UI.\n\n## Bug Fixes and Other Changes\n\n*   Improve rendering of HTML stubs for Fairness Indicators UI\n*   Depends on 
`tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,<3`.\n*   Depends on `protobuf>=3.13,<4`.\n*   Depends on `tensorflow-data-validation>=1.2.0,<1.3.0`.\n*   Depends on `tensorflow-model-analysis>=0.33,<0.34`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.30.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.5.*,<3`.\n*   Depends on `tensorflow-data-validation>=0.30,<0.31`.\n*   Depends on `tensorflow-model-analysis>=0.30,<0.31`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.29.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorflow-data-validation>=0.29,<0.30`.\n*   Depends on `tensorflow-model-analysis>=0.29,<0.30`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.28.0\n\n## Major Features and Improvements\n\n*   In Fairness Indicators UI, sort metrics list to show common metrics first\n*   For lift, support negative values in bar chart.\n*   Adding two new metrics - Flip Count and Flip Rate to evaluate Counterfactual\n    Fairness.\n*   Add Lift metrics under addons/fairness.\n*   Porting Lift metrics into FI UI.\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorflow-data-validation>=0.28,<0.29`.\n*   Depends on `tensorflow-model-analysis>=0.28,<0.29`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.27.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug fixes and other changes\n\n*   Added test cases for DLVM testing.\n*   Move the util files to a separate folder.\n*   Add `tensorflow-hub` as a dependency because it's used inside the\n    example_model.py.\n*   Depends on `tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,<3`.\n*   Depends on `tensorflow-data-validation>=0.27,<0.28`.\n*   Depends on `tensorflow-model-analysis>=0.27,<0.28`.\n\n## 
Breaking changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.26.0\n\n## Major Features and Improvements\n\n*   Sorting fairness metrics table rows to keep slices in order with slice drop\n    down in the UI.\n\n## Bug fixes and other changes\n\n*   Update fairness_indicators.documentation.examples.util to TensorFlow 2.0.\n*   Table now displays 3 decimal places instead of 2.\n*   Fix the bug that metric list won't refresh if the input eval result changed.\n*   Remove d3-tip dependency.\n*   Depends on `tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.4.*,<3`.\n*   Depends on `tensorflow-data-validation>=0.26,<0.27`.\n*   Depends on `tensorflow-model-analysis>=0.26,<0.27`.\n\n## Breaking changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.25.0\n\n## Major Features and Improvements\n\n*   Add workflow buttons to Fairness Indicators UI, providing tutorial on how to\n    configure metrics and parameters, and how to interpret the results.\n*   Add metric definitions as tooltips in the metric selector UI\n*   Removing prefix from metric names in graph titles in UI.\n*   From this release Fairness Indicators will also be hosting nightly packages\n    on https://pypi-nightly.tensorflow.org. To install the nightly package use\n    the following command:\n\n    ```\n    pip install --extra-index-url https://pypi-nightly.tensorflow.org/simple fairness-indicators\n    ```\n\n    Note: These nightly packages are unstable and breakages are likely to\n    happen. The fix could often take a week or more depending on the complexity\n    involved for the wheels to be available on the PyPI cloud service. 
You can\n    always use the stable version of Fairness Indicators available on PyPI by\n    running the command `pip install fairness-indicators` .\n\n## Bug fixes and other changes\n\n*   Update table colors.\n*   Modify privacy note in Fairness Indicators UI.\n*   Depends on `tensorflow-data-validation>=0.25,<0.26`.\n*   Depends on `tensorflow-model-analysis>=0.25,<0.26`.\n\n## Breaking changes\n\n* N/A\n\n## Deprecations\n\n* N/A\n\n# Version 0.24.0\n\n## Major Features and Improvements\n\n*   Made the Fairness Indicators UI thresholds drop down list sorted.\n\n## Bug fixes and other changes\n\n*   Fix in the issue where the Sort menu is not hidden when there is no model\n    comparison.\n*   Depends on `tensorflow-data-validation>=0.24,<0.25`.\n*   Depends on `tensorflow-model-analysis>=0.24,<0.25`.\n\n## Breaking changes\n\n* N/A\n\n## Deprecations\n\n*   Deprecated Py3.5 support.\n\n# Version 0.23.1\n\n## Major Features and Improvements\n\n* N/A\n\n## Bug fixes and other changes\n\n*  Fix broken import path in Fairness_Indicators_Example_Colab and Fairness_Indicators_on_TF_Hub_Text_Embeddings.\n\n## Breaking changes\n\n* N/A\n\n## Deprecations\n\n* N/A\n\n# Version 0.23.0\n\n## Major Features and Improvements\n\n* N/A\n\n## Bug fixes and other changes\n\n*  Depends on `tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,<3`.\n*  Depends on `tensorflow-data-validation>=0.23,<0.24`.\n*  Depends on `tensorflow-model-analysis>=0.23,<0.24`.\n\n## Breaking changes\n\n* N/A\n\n## Deprecations\n\n*  Deprecating Py2 support.\n*  Note: We plan to drop py3.5 support in the next release.\n"
  },
  {
    "path": "docs/__init__.py",
    "content": "# Copyright 2019 Google LLC. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n"
  },
  {
    "path": "docs/guide/_index.yaml",
    "content": "book_path: /responsible_ai/_book.yaml\nproject_path: /responsible_ai/_project.yaml\ntitle: Fairness Indicators\nlanding_page:\n  custom_css_path: /site-assets/css/style.css\n  nav: left\n  meta_tags:\n  - name: description\n    content: >\n      Fairness Indicators tool suite for TensorFlow.\n  rows:\n  - classname: devsite-landing-row-100\n  - heading: Fairness Indicators\n    options:\n    - description-50\n    items:\n    - description: >\n        <p>\n        Fairness Indicators is a library that enables easy computation of commonly-identified\n        fairness metrics for binary and multiclass classifiers. With the Fairness Indicators tool\n        suite, you can:\n        <ul>\n          <li>\n              Compute commonly-identified fairness metrics for classification models\n          </li>\n          <li>\n              Compare model performance across subgroups to a baseline, or to other models\n          </li>\n          <li>\n              Use confidence intervals to surface statistically significant disparities\n          </li>\n          <li>\n              Perform evaluation over multiple thresholds\n          </li>\n        </ul>\n        </p>\n        <p>\n        Use Fairness Indicators via the:\n        <ul>\n          <li>\n              <a href=\"https://www.tensorflow.org/tfx/guide/evaluator\">Evaluator\n              component </a>in a <a href =\"https://www.tensorflow.org/tfx\">TFX pipeline</a>\n          </li>\n          <li>\n              <a href=\"https://github.com/tensorflow/tensorboard/blob/master/docs/fairness-indicators.md\">\n              TensorBoard plugin</a>\n          </li>\n          <li>\n              <a href=\"https://www.tensorflow.org/tfx/guide/fairness_indicators\">TensorFlow Model\n              Analysis library</a>\n          </li>\n          <li>\n              <a href=\"https://www.tensorflow.org/tfx/guide/fairness_indicators#using_fairness_indicators_with_non-tensorflow_models\">Model\n             
 Agnostic TFMA library</a>\n          </li>\n\n    - code_block: |\n          <pre class = \"prettyprint\">\n          eval_config_pbtxt = \"\"\"\n\n          model_specs {\n              label_key: \"%s\"\n          }\n\n          metrics_specs {\n              metrics {\n                  class_name: \"FairnessIndicators\"\n                  config: '{ \"thresholds\": [0.25, 0.5, 0.75] }'\n              }\n              metrics {\n                  class_name: \"ExampleCount\"\n              }\n          }\n\n          slicing_specs {}\n          slicing_specs {\n              feature_keys: \"%s\"\n          }\n\n          options {\n              compute_confidence_intervals { value: False }\n              disabled_outputs{values: \"analysis\"}\n          }\n          \"\"\" % (LABEL_KEY, GROUP_KEY)\n          </pre>\n  - classname: devsite-landing-row-100\n    items:\n    - description: >\n        <h3>Resources</h3>\n\n  - classname: devsite-landing-row-cards\n    items:\n    - heading: \"ML Practicum: Fairness in Perspective API using Fairness Indicators\"\n      image_path: /responsible_ai/fairness_indicators/images/mlpracticum.png\n      path: \"https://developers.google.com/machine-learning/practica/fairness-indicators?utm_source=github&utm_medium=github&utm_campaign=fi-practicum&utm_term=&utm_content=repo-body\"\n      buttons:\n      - label: \"Try the Case Study\"\n        path: \"https://developers.google.com/machine-learning/practica/fairness-indicators?utm_source=github&utm_medium=github&utm_campaign=fi-practicum&utm_term=&utm_content=repo-body\"\n\n    - heading: \"Fairness Indicators on the TensorFlow blog\"\n      image_path: /resources/images/tf-logo-card-16x9.png\n      path: https://blog.tensorflow.org/2019/12/fairness-indicators-fair-ML-systems.html\n      buttons:\n      - label: \"Read on the TensorFlow blog\"\n        path: https://blog.tensorflow.org/2019/12/fairness-indicators-fair-ML-systems.html\n\n    - heading: \"Fairness Indicators on 
GitHub\"\n      image_path: /resources/images/github-card-16x9.png\n      path: https://github.com/tensorflow/fairness-indicators\n      buttons:\n      - label: \"View on GitHub\"\n        path: https://github.com/tensorflow/fairness-indicators\n\n  - classname: devsite-landing-row-cards\n    items:\n    - heading: \"Fairness Indicators on the Google AI Blog\"\n      image_path: /responsible_ai/fairness_indicators/images/googleai.png\n      path: https://ai.googleblog.com/2019/12/fairness-indicators-scalable.html\n      buttons:\n      - label: \"Read on Google AI blog\"\n        path: https://ai.googleblog.com/2019/12/fairness-indicators-scalable.html\n\n    - heading: \"Fairness Indicators at Google I/O\"\n      path: https://www.youtube.com/watch?v=6CwzDoE8J4M\n      youtube_id: 6CwzDoE8J4M?rel=0&show_info=0\n      buttons:\n      - label: \"Watch the video\"\n        path: https://www.youtube.com/watch?v=6CwzDoE8J4M\n"
  },
  {
    "path": "docs/guide/_toc.yaml",
    "content": "toc:\n- title: Overview\n  path: /responsible_ai/fairness_indicators/guide/\n- title: Thinking about fairness evaluation\n  path: /responsible_ai/fairness_indicators/guide/guidance\n"
  },
  {
    "path": "docs/guide/guidance.md",
    "content": "# Fairness Indicators: Thinking about Fairness Evaluation\n\nFairness Indicators is a useful tool for evaluating _binary_ and _multi-class_\nclassifiers for fairness. Eventually, we hope to expand this tool, in\npartnership with all of you, to evaluate even more considerations.\n\nKeep in mind that quantitative evaluation is only one part of evaluating a\nbroader user experience. Start by thinking about the different _contexts_\nthrough which a user may experience your product. Who are the different types of\nusers your product is expected to serve? Who else may be affected by the\nexperience?\n\nWhen considering AI's impact on people, it is important to always remember that\nhuman societies are extremely complex! Understanding people, and their social\nidentities, social structures and cultural systems are each huge fields of open\nresearch in their own right. Throw in the complexities of cross-cultural\ndifferences around the globe, and getting even a foothold on understanding\nsocietal impact can be challenging. Whenever possible, it is recommended you\nconsult with appropriate domain experts, which may include social scientists,\nsociolinguists, and cultural anthropologists, as well as with members of the\npopulations on which technology will be deployed.\n\nA single model, for example, the toxicity model that we leverage in the\n[example colab](../../tutorials/Fairness_Indicators_Example_Colab),\ncan be used in many different contexts. A toxicity model deployed on a website\nto filter offensive comments, for example, is a very different use case than the\nmodel being deployed in an example web UI where users can type in a sentence and\nsee what score the model gives. 
Depending on the use case, and how users\nexperience the model prediction, your product will have different risks,\neffects, and opportunities and you may want to evaluate for different fairness\nconcerns.\n\nThe questions above are the foundation of what ethical considerations, including\nfairness, you may want to take into account when designing and developing your\nML-based product. These questions also motivate which metrics and which groups\nof users you should use the tool to evaluate.\n\nBefore diving in further, here are three recommended resources for getting\nstarted:\n\n*   **[The People + AI Guidebook](https://pair.withgoogle.com/) for\n    Human-centered AI design:** This guidebook is a great resource for the\n    questions and aspects to keep in mind when designing a machine-learning\n    based product. While we created this guidebook with designers in mind, many\n    of the principles will help answer questions like the one posed above.\n*   **[Our Fairness Lessons Learned](https://www.youtube.com/watch?v=6CwzDoE8J4M):**\n    This talk at Google I/O discusses lessons we have learned in our goal to\n    build and design inclusive products.\n*   **[ML Crash Course: Fairness](https://developers.google.com/machine-learning/crash-course/fairness/video-lecture):**\n    The ML Crash Course has a 70 minute section dedicated to identifying and\n    evaluating fairness concerns\n\nSo, why look at individual slices? Evaluation over individual slices is\nimportant as strong overall metrics can obscure poor performance for certain\ngroups. 
Similarly, performing well for a certain metric (accuracy, AUC) doesn’t\nalways translate to acceptable performance for other metrics (false positive\nrate, false negative rate) that are equally important in assessing opportunity\nand harm for users.\n\nThe sections below walk through some of the aspects to consider.\n\n## Which groups should I slice by?\n\nIn general, a good practice is to slice by as many groups as may be affected by\nyour product, since you never know when performance might differ for one of\nthem. However, if you aren’t sure, think about the different users who may be\nengaging with your product, and how they might be affected. Consider,\nespecially, slices related to sensitive characteristics such as race, ethnicity,\ngender, nationality, income, sexual orientation, and disability status.\n\n**What if I don’t have data labeled for the slices I want to investigate?**\n\nGood question. We know that many datasets don’t have ground-truth labels for\nindividual identity attributes.\n\nIf you find yourself in this position, we recommend a few approaches:\n\n1.  Identify if there _are_ attributes that you have that may give you some\n    insight into the performance across groups. For example, _geography_, while\n    not equivalent to ethnicity & race, may help you uncover any disparate\n    patterns in performance.\n1.  Identify if there are representative public datasets that might map well to\n    your problem. You can find a range of diverse and inclusive datasets on the\n    [Google AI site](https://ai.google/responsibilities/responsible-ai-practices/?category=fairness),\n    which include\n    [Project Respect](https://www.blog.google/technology/ai/fairness-matters-promoting-pride-and-respect-ai/),\n    [Inclusive Images](https://www.kaggle.com/c/inclusive-images-challenge), and\n    [Open Images Extended](https://ai.google/tools/datasets/open-images-extended-crowdsourced/),\n    among others.\n1.  
Leverage rules or classifiers, when relevant, to label your data with\n    objective surface-level attributes. For example, you can label text as to\n    whether or not there is an identity term _in_ the sentence. Keep in mind\n    that classifiers have their own challenges, and if you’re not careful, may\n    introduce another layer of bias as well. Be clear about what your classifier\n    is <span style=\"text-decoration:underline;\">actually</span> classifying. For\n    example, an age classifier on images is in fact classifying _perceived age_.\n    Additionally, when possible, leverage surface-level attributes that _can_ be\n    objectively identified in the data. For example, it is ill-advised to build\n    an image classifier for race or ethnicity, because these are not visual\n    traits that can be defined in an image. A classifier would likely pick up on\n    proxies or stereotypes. Instead, building a classifier for skin tone may be\n    a more appropriate way to label and evaluate an image. Lastly, ensure high\n    accuracy for classifiers labeling such attributes.\n1.  Find more representative data that is labeled\n\n**Always make sure to evaluate on multiple, diverse datasets.**\n\nIf your evaluation data is not adequately representative of your user base, or\nthe types of data likely to be encountered, you may end up with deceptively good\nfairness metrics. Similarly, high model performance on one dataset doesn’t\nguarantee high performance on others.\n\n**Keep in mind subgroups aren’t always the best way to classify individuals.**\n\nPeople are multidimensional and belong to more than one group, even within a\nsingle dimension -- consider someone who is multiracial, or belongs to multiple\nracial groups. Also, while overall metrics for a given racial group may look\nequitable, particular interactions, such as race and gender together may show\nunintended bias. 
Moreover, many subgroups have fuzzy boundaries which are\nconstantly being redrawn.\n\n**When have I tested enough slices, and how do I know which slices to test?**\n\nWe acknowledge that there are a vast number of groups or slices that may be\nrelevant to test. When possible, we recommend slicing and evaluating a diverse\nand wide range of slices, then deep-diving where you spot opportunities for\nimprovement. Even though you may not see concerns on the slices you have tested,\nthat doesn’t imply that your product works for _all_ users; getting diverse user\nfeedback and testing is important to ensure that you are continually identifying\nnew opportunities.\n\nTo get started, we recommend thinking through your particular use case and the\ndifferent ways users may engage with your product. How might different users\nhave different experiences? What does that mean for slices you should evaluate?\nCollecting feedback from diverse users may also highlight potential slices to\nprioritize.\n\n## Which metrics should I choose?\n\nWhen selecting which metrics to evaluate for your system, consider who will be\nexperiencing your model, how it will be experienced, and the effects of that\nexperience.\n\nFor example, how does your model give people more dignity or autonomy, or\npositively impact their emotional, physical or financial wellbeing? In contrast,\nhow could your model’s predictions reduce people's dignity or autonomy, or\nnegatively impact their emotional, physical or financial wellbeing?\n\n**In general, we recommend slicing _all_ your existing performance metrics as\ngood practice. 
We also recommend evaluating your metrics across\n_<span style=\"text-decoration:underline;\">multiple thresholds</span>_** in order\nto understand how the choice of threshold can affect performance for different\ngroups.\n\nIn addition, if there is a predicted label which is uniformly “good” or “bad”,\nthen consider reporting (for each subgroup) the rate at which that label is\npredicted. For example, a “good” label would be a label whose prediction grants\na person access to some resource, or enables them to perform some action.\n\n## Critical fairness metrics for classification\n\nWhen thinking about a classification model, think about the effects of _errors_\n(the differences between the actual “ground truth” label, and the label from the\nmodel). If some errors pose more risk or harm to your users, make sure you\nevaluate the rates of these errors across groups of users. These error rates are\ndefined below, in the metrics currently supported by the Fairness Indicators\nbeta.\n\n**Over the course of the next year, we hope to release case studies of different\nuse cases and the metrics associated with these so that we can better highlight\nwhen different metrics might be most appropriate.**\n\n**Metrics available today in Fairness Indicators**\n\nNote: There are many valuable fairness metrics that are not currently supported\nin the Fairness Indicators beta. As we continue to add more metrics, we will\ncontinue to add guidance for these metrics here. Below, you can access\ninstructions to add your own metrics to Fairness Indicators. Additionally,\nplease reach out to [tfx@tensorflow.org](mailto:tfx@tensorflow.org) if there are\nmetrics that you would like to see. 
We hope to partner with you to build this\nout further.\n\n**Positive Rate / Negative Rate**\n\n*   _<span style=\"text-decoration:underline;\">Definition:</span>_ The percentage\n    of data points that are classified as positive or negative, independent of\n    ground truth\n*   _<span style=\"text-decoration:underline;\">Relates to:</span>_ Demographic\n    Parity and Equality of Outcomes, when equal across subgroups\n*   _<span style=\"text-decoration:underline;\">When to use this metric:</span>_\n    Fairness use cases where having equal final percentages of groups is\n    important\n\n**True Positive Rate / False Negative Rate**\n\n*   _<span style=\"text-decoration:underline;\">Definition:</span>_ The percentage\n    of positive data points (as labeled in the ground truth) that are\n    _correctly_ classified as positive, or the percentage of positive data\n    points that are _incorrectly_ classified as negative\n*   _<span style=\"text-decoration:underline;\">Relates to:</span>_ Equality of\n    Opportunity (for the positive class), when equal across subgroups\n*   _<span style=\"text-decoration:underline;\">When to use this metric:</span>_\n    Fairness use cases where it is important that the same % of qualified\n    candidates are rated positive in each group. 
This is most commonly\n    recommended in cases of classifying positive outcomes, such as loan\n    applications, school admissions, or whether content is kid-friendly\n\n**True Negative Rate / False Positive Rate**\n\n*   _<span style=\"text-decoration:underline;\">Definition:</span>_ The percentage\n    of negative data points (as labeled in the ground truth) that are correctly\n    classified as negative, or the percentage of negative data points that are\n    incorrectly classified as positive\n*   _<span style=\"text-decoration:underline;\">Relates to:</span>_ Equality of\n    Opportunity (for the negative class), when equal across subgroups\n*   _<span style=\"text-decoration:underline;\">When to use this metric:</span>_\n    Fairness use cases where false positives (misclassifying something as\n    positive) are more concerning than missed positives. This is most common in\n    abuse cases, where _positives_ often lead to negative actions. These metrics\n    are also important for facial analysis technologies such as face detection\n    or face attributes\n\nNote: When both “positive” and “negative” mistakes are equally important, the\nmetric is called “equality of\n<span style=\"text-decoration:underline;\">odds</span>”. This can be measured by\nevaluating and aiming for equality across both the TNR & FNR, or both the TPR &\nFPR. 
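Equality of odds across two groups can be checked directly by computing the TPR and FPR per subgroup and comparing them. A minimal sketch in plain Python (the label, prediction, and group lists below are made-up stand-ins for your own ground truth and thresholded model outputs):

```python
from collections import defaultdict

def rates_by_group(labels, preds, groups):
    # Tally confusion-matrix cells per subgroup.
    cells = defaultdict(lambda: {'tp': 0, 'fp': 0, 'fn': 0, 'tn': 0})
    for y, p, g in zip(labels, preds, groups):
        cell = ('tp' if p else 'fn') if y else ('fp' if p else 'tn')
        cells[g][cell] += 1
    rates = {}
    for g, c in cells.items():
        rates[g] = {
            'tpr': c['tp'] / (c['tp'] + c['fn']),  # a.k.a. recall
            'fpr': c['fp'] / (c['fp'] + c['tn']),
        }
    return rates

# Toy data: ground-truth labels, thresholded predictions, subgroup ids.
print(rates_by_group([1, 1, 0, 0, 1, 0],
                     [1, 0, 0, 1, 1, 0],
                     ['a', 'a', 'a', 'b', 'b', 'b']))
```

Equality of odds holds, approximately, when both rates line up across the groups; in the toy data above they clearly diverge.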
For example, an app that counts how many cars go past a stop sign is\nroughly equally bad whether it accidentally includes an extra car (a\nfalse positive) or accidentally excludes a car (a false negative).\n\n**Accuracy & AUC**\n\n*   _<span style=\"text-decoration:underline;\">Relates to:</span>_ Predictive\n    Parity, when equal across subgroups\n*   _<span style=\"text-decoration:underline;\">When to use these metrics:</span>_\n    Cases where precision of the task is most critical (not necessarily in a\n    given direction), such as face identification or face clustering\n\n**False Discovery Rate**\n\n*   _<span style=\"text-decoration:underline;\">Definition:</span>_ The percentage\n    of negative data points (as labeled in the ground truth) that are\n    incorrectly classified as positive out of all data points classified as\n    positive. This is also the complement of PPV (FDR = 1 - PPV)\n*   _<span style=\"text-decoration:underline;\">Relates to:</span>_ Predictive\n    Parity (also known as Calibration), when equal across subgroups\n*   _<span style=\"text-decoration:underline;\">When to use this metric:</span>_\n    Cases where the fraction of correct positive predictions should be equal\n    across subgroups\n\n**False Omission Rate**\n\n*   _<span style=\"text-decoration:underline;\">Definition:</span>_ The percentage\n    of positive data points (as labeled in the ground truth) that are\n    incorrectly classified as negative out of all data points classified as\n    negative. 
This is also the complement of NPV (FOR = 1 - NPV)\n*   _<span style=\"text-decoration:underline;\">Relates to:</span>_ Predictive\n    Parity (also known as Calibration), when equal across subgroups\n*   _<span style=\"text-decoration:underline;\">When to use this metric:</span>_\n    Cases where the fraction of correct negative predictions should be equal\n    across subgroups\n\nNote: When used together, False Discovery Rate and False Omission Rate relate to\nConditional Use Accuracy Equality, when FDR and FOR are both equal across\nsubgroups. FDR and FOR are also similar to FPR and FNR, where FDR/FOR compare\nFP/FN to predicted positive/negative data points, and FPR/FNR compare FP/FN to\nground truth negative/positive data points. FDR/FOR can be used instead of\nFPR/FNR when predictive parity is more critical than equality of opportunity.\n\n**Overall Flip Rate / Positive to Negative Prediction Flip Rate / Negative to\nPositive Prediction Flip Rate**\n\n*   *<span style=\"text-decoration:underline;\">Definition:</span>* The\n    probability that the classifier gives a different prediction if the identity\n    attribute in a given example were changed.\n*   *<span style=\"text-decoration:underline;\">Relates to:</span>* Counterfactual\n    fairness\n*   *<span style=\"text-decoration:underline;\">When to use this metric:</span>*\n    When determining whether the model’s prediction changes when the sensitive\n    attributes referenced in the example are removed or replaced. 
If it does,\n    consider using the Counterfactual Logit Pairing technique within the\n    TensorFlow Model Remediation library.\n\n**Flip Count / Positive to Negative Prediction Flip Count / Negative to Positive\nPrediction Flip Count**\n\n*   *<span style=\"text-decoration:underline;\">Definition:</span>* The number of\n    times the classifier gives a different prediction if the identity term in a\n    given example were changed.\n*   *<span style=\"text-decoration:underline;\">Relates to:</span>* Counterfactual\n    fairness\n*   *<span style=\"text-decoration:underline;\">When to use this metric:</span>*\n    When determining whether the model’s prediction changes when the sensitive\n    attributes referenced in the example are removed or replaced. If it does,\n    consider using the Counterfactual Logit Pairing technique within the\n    TensorFlow Model Remediation library.\n\n**Examples of which metrics to select**\n\n*   _Systematically failing to detect faces in a camera app can lead to a\n    negative user experience for certain user groups._ In this case, false\n    negatives in a face detection system may lead to product failure, while a\n    false positive (detecting a face when there isn’t one) may pose a slight\n    annoyance to the user. Thus, evaluating and minimizing the false negative\n    rate is important for this use case.\n*   _Unfairly marking text comments from certain people as “spam” or “high\n    toxicity” in a moderation system leads to certain voices being silenced._ On\n    one hand, a high false positive rate leads to unfair censorship. On the\n    other, a high false negative rate could lead to a proliferation of toxic\n    content from certain groups, which may both harm the user and constitute a\n    representational harm for those groups. 
Thus, both metrics are important to\n    consider, in addition to metrics which take into account all types of errors\n    such as accuracy or AUC.\n\n**Don’t see the metrics you’re looking for?**\n\nFollow the documentation\n[here](https://tensorflow.github.io/model-analysis/post_export_metrics/)\nto add your own custom metric.\n\n## Final notes\n\n**A gap in a metric between two groups can be a sign that your model may have\nunfair skews**. You should interpret your results according to your use case.\nHowever, the first sign that you may be treating one set of users _unfairly_ is\nwhen the metrics for that set of users and your overall metrics are significantly\ndifferent. Make sure to account for confidence intervals when looking at these\ndifferences. When you have too few samples in a particular slice, the difference\nbetween metrics may not be accurate.\n\n**Achieving equality across groups on Fairness Indicators doesn’t mean the model\nis fair.** Systems are highly complex, and achieving equality on one (or even\nall) of the provided metrics can’t guarantee fairness.\n\n**Fairness evaluations should be run throughout the development process and\npost-launch (not the day before launch).** Just like improving your product is\nan ongoing process and subject to adjustment based on user and market feedback,\nmaking your product fair and equitable requires ongoing attention. As different\naspects of the model change, such as training data, inputs from other models,\nor the design itself, fairness metrics are likely to change. “Clearing the bar”\nonce isn’t enough to ensure that all of the interacting components have remained\nintact over time.\n\n**Adversarial testing should be performed for rare, malicious examples.**\nFairness evaluations aren’t meant to replace adversarial testing. Additional\ndefense against rare, targeted examples is crucial, as these examples probably\nwill not manifest in training or evaluation data.\n"
  },
  {
    "path": "docs/index.md",
    "content": "# Fairness Indicators\n\n/// html | div[style='float: left; width: 50%;']\nFairness Indicators is a library that enables easy computation of commonly-identified fairness metrics for binary and multiclass classifiers. With the Fairness Indicators tool suite, you can:\n\n- Compute commonly-identified fairness metrics for classification models\n- Compare model performance across subgroups to a baseline, or to other models\n- Use confidence intervals to surface statistically significant disparities\n- Perform evaluation over multiple thresholds\n\nUse Fairness Indicators via the:\n\n- [Evaluator component](https://tensorflow.github.io/tfx/guide/evaluator/) in a [TFX pipeline](https://tensorflow.github.io/tfx/)\n- [TensorBoard plugin](https://github.com/tensorflow/tensorboard/blob/master/docs/fairness-indicators.md)\n- [TensorFlow Model Analysis library](https://tensorflow.github.io/tfx/guide/fairness_indicators/)\n- [Model Agnostic TFMA library](https://tensorflow.github.io/tfx/guide/fairness_indicators/#using-fairness-indicators-with-non-tensorflow-models)\n<!-- TODO: Change the TFMA link when the new docs are deployed -->\n///\n\n/// html | div[style='float: right;width: 50%;']\n```python\neval_config_pbtxt = \"\"\"\n\nmodel_specs {\n    label_key: \"%s\"\n}\n\nmetrics_specs {\n    metrics {\n        class_name: \"FairnessIndicators\"\n        config: '{ \"thresholds\": [0.25, 0.5, 0.75] }'\n    }\n    metrics {\n        class_name: \"ExampleCount\"\n    }\n}\n\nslicing_specs {}\nslicing_specs {\n    feature_keys: \"%s\"\n}\n\noptions {\n    compute_confidence_intervals { value: False }\n    disabled_outputs{values: \"analysis\"}\n}\n\"\"\" % (LABEL_KEY, GROUP_KEY)\n```\n///\n\n/// html | div[style='clear: both;']\n///\n\n<div class=\"grid cards\" markdown>\n\n-   ![ML Practicum: Fairness in Perspective API using Fairness Indicators](https://www.tensorflow.org/static/responsible_ai/fairness_indicators/images/mlpracticum_480.png)\n\n    ### [ML 
Practicum: Fairness in Perspective API using Fairness Indicators](https://developers.google.com/machine-learning/practica/fairness-indicators?utm_source=github&utm_medium=github&utm_campaign=fi-practicum&utm_term=&utm_content=repo-body)\n\n    ---\n\n    [Try the Case Study](https://developers.google.com/machine-learning/practica/fairness-indicators?utm_source=github&utm_medium=github&utm_campaign=fi-practicum&utm_term=&utm_content=repo-body)\n\n-   ![Fairness Indicators on the TensorFlow blog](images/tf_full_color_primary_icon.svg)\n\n    ### [Fairness Indicators on the TensorFlow blog](https://blog.tensorflow.org/2019/12/fairness-indicators-fair-ML-systems.html)\n\n    ---\n\n    [Read on the TensorFlow blog](https://blog.tensorflow.org/2019/12/fairness-indicators-fair-ML-systems.html)\n\n-   ![Fairness Indicators on GitHub](https://www.tensorflow.org/static/resources/images/github-card-16x9_480.png)\n\n    ### [Fairness Indicators on GitHub](https://github.com/tensorflow/fairness-indicators)\n    ---\n\n    [View on GitHub](https://github.com/tensorflow/fairness-indicators)\n\n-   ![Fairness Indicators on the Google AI Blog](https://www.tensorflow.org/static/responsible_ai/fairness_indicators/images/googleai_720.png)\n\n    ### [Fairness Indicators on the Google AI Blog](https://ai.googleblog.com/2019/12/fairness-indicators-scalable.html)\n    ---\n\n    [Read on Google AI blog](https://ai.googleblog.com/2019/12/fairness-indicators-scalable.html)\n\n-   <iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/6CwzDoE8J4M?si=gIL2KHdj96_SxdVH\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen></iframe>\n\n    ### [Fairness Indicators at Google I/O](https://www.youtube.com/watch?v=6CwzDoE8J4M)\n\n    ---\n\n    [Watch the 
video](https://www.youtube.com/watch?v=6CwzDoE8J4M)\n\n</div>\n"
  },
  {
    "path": "docs/javascripts/mathjax.js",
    "content": "window.MathJax = {\n  tex: {\n    inlineMath: [[\"\\\\(\", \"\\\\)\"]],\n    displayMath: [[\"\\\\[\", \"\\\\]\"]],\n    processEscapes: true,\n    processEnvironments: true\n  },\n  options: {\n    ignoreHtmlClass: \".*|\",\n    processHtmlClass: \"arithmatex\"\n  }\n};\n\ndocument$.subscribe(() => {\n  MathJax.startup.output.clearCache()\n  MathJax.typesetClear()\n  MathJax.texReset()\n  MathJax.typesetPromise()\n})\n"
  },
  {
    "path": "docs/stylesheets/extra.css",
    "content": ":root {\n  --md-primary-fg-color:        #FFA800;\n  --md-primary-fg-color--light: #CCCCCC;\n  --md-primary-fg-color--dark:  #425066;\n}\n\n.video-wrapper {\n  max-width: 240px;\n  display: flex;\n  flex-direction: row;\n}\n.video-wrapper > iframe {\n  width: 100%;\n  aspect-ratio: 16 / 9;\n}\n\n.buttons-wrapper {\n    flex-wrap: wrap;\n    gap: 1em;\n    display: flex;\n    /* flex-grow: 1; */\n    /* justify-content: center; */\n    /* align-content: center; */\n}\n\n.buttons-wrapper > a {\n    justify-content: center;\n    align-content: center;\n    flex-wrap: nowrap;\n    /* gap: 1em; */\n    align-items: center;\n    text-align: center;\n    flex: 1 1 30%;\n    display: flex;\n}\n\n.md-button > .buttons-content {\n    align-items: center;\n    justify-content: center;\n    display: flex;\n    gap: 1em;\n}\n"
  },
  {
    "path": "docs/tutorials/Facessd_Fairness_Indicators_Example_Colab.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"Sxt-9qpNgPxo\"\n   },\n   \"source\": [\n    \"##### Copyright 2020 The TensorFlow Authors.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"Phnw6c3-gQ1f\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#@title Licensed under the Apache License, Version 2.0 (the \\\"License\\\");\\n\",\n    \"# you may not use this file except in compliance with the License.\\n\",\n    \"# You may obtain a copy of the License at\\n\",\n    \"#\\n\",\n    \"# https://www.apache.org/licenses/LICENSE-2.0\\n\",\n    \"#\\n\",\n    \"# Unless required by applicable law or agreed to in writing, software\\n\",\n    \"# distributed under the License is distributed on an \\\"AS IS\\\" BASIS,\\n\",\n    \"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\\n\",\n    \"# See the License for the specific language governing permissions and\\n\",\n    \"# limitations under the License.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"aalPefrUUplk\"\n   },\n   \"source\": [\n    \"# FaceSSD Fairness Indicators Example Colab\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"KFRBcGOYgEAI\"\n   },\n   \"source\": [\n    \"<div class=\\\"buttons-wrapper\\\">\\n\",\n    \"  <a class=\\\"md-button\\\" target=\\\"_blank\\\" href=\\n\",\n    \"     \\\"https://tensorflow.github.io/fairness-indicators/tutorials/Facessd_Fairness_Indicators_Example_Colab\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img src=\\\"https://www.tensorflow.org/images/tf_logo_32px.png\\\">\\n\",\n    \"      View on TensorFlow.org\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"  <a class=\\\"md-button\\\" target=\\\"_blank\\\" href=\\n\",\n    \"     
\\\"https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Facessd_Fairness_Indicators_Example_Colab.ipynb\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img src=\\n\",\n    \"\\t   \\\"https://www.tensorflow.org/images/colab_logo_32px.png\\\">\\n\",\n    \"      Run in Google Colab\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"  <a class=\\\"md-button\\\" target=\\\"_blank\\\" href=\\n\",\n    \"     \\\"https://github.com/tensorflow/fairness-indicators/tree/master/docs/tutorials/Facessd_Fairness_Indicators_Example_Colab.ipynb\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img width=\\\"32px\\\" src=\\n\",\n    \"\\t   \\\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\\\">\\n\",\n    \"      View source on GitHub\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"  <a class=\\\"md-button\\\" href=\\n\",\n    \"     \\\"https://storage.googleapis.com/tensorflow_docs/fairness-indicators/g3doc/tutorials/Facessd_Fairness_Indicators_Example_Colab.ipynb\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img src=\\n\",\n    \"\\t   \\\"https://www.tensorflow.org/images/download_logo_32px.png\\\">\\n\",\n    \"      Download notebook\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"</div>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"UZ48WFLwbCL6\"\n   },\n   \"source\": [\n    \"##Overview\\n\",\n    \"\\n\",\n    \"In this activity, you'll use [Fairness Indicators](https://tensorflow.github.io/fairness-indicators) to explore the [FaceSSD predictions on Labeled Faces in the Wild dataset](https://modelcards.withgoogle.com/face-detection). 
Fairness Indicators is a suite of tools built on top of [TensorFlow Model Analysis](https://tensorflow.github.io/model-analysis/get_started) that enables regular evaluation of fairness metrics in product pipelines.\\n\",\n    \"\\n\",\n    \"## About the Dataset\\n\",\n    \"\\n\",\n    \"In this exercise, you'll work with the FaceSSD prediction dataset, approximately 200k different image predictions and groundtruths generated by the FaceSSD API.\\n\",\n    \"\\n\",\n    \"## About the Tools\\n\",\n    \"\\n\",\n    \"[TensorFlow Model Analysis](https://tensorflow.github.io/model-analysis/get_started) is a library for evaluating both TensorFlow and non-TensorFlow machine learning models. It allows users to evaluate their models on large amounts of data in a distributed manner, computing in-graph and other metrics over different slices of data, and visualizing them in notebooks.\\n\",\n    \"\\n\",\n    \"[TensorFlow Data Validation](https://tensorflow.github.io/data-validation/get_started) is one tool you can use to analyze your data. You can use it to find potential problems in your data, such as missing values and data imbalances, that can lead to fairness disparities.\\n\",\n    \"\\n\",\n    \"With [Fairness Indicators](https://tensorflow.github.io/fairness-indicators/), users will be able to: \\n\",\n    \"\\n\",\n    \"* Evaluate model performance, sliced across defined groups of users\\n\",\n    \"* Feel confident about results with confidence intervals and evaluations at multiple thresholds\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"u33JXdluZ2lG\"\n   },\n   \"source\": [\n    \"# Importing\\n\",\n    \"\\n\",\n    \"Run the following code to install the fairness_indicators library. This package contains the tools we'll be using in this exercise. 
Restart Runtime may be requested but is not necessary.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"EoRNffG599XP\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"!pip install apache_beam\\n\",\n    \"!pip install fairness-indicators\\n\",\n    \"!pip install witwidget\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"B8dlyTyiTe-9\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import os\\n\",\n    \"import tempfile\\n\",\n    \"import apache_beam as beam\\n\",\n    \"import numpy as np\\n\",\n    \"import pandas as pd\\n\",\n    \"from datetime import datetime\\n\",\n    \"\\n\",\n    \"import tensorflow_hub as hub\\n\",\n    \"import tensorflow as tf\\n\",\n    \"import tensorflow_model_analysis as tfma\\n\",\n    \"import tensorflow_data_validation as tfdv\\n\",\n    \"from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators\\n\",\n    \"from tensorflow_model_analysis.addons.fairness.view import widget_view\\n\",\n    \"from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_predict as agnostic_predict\\n\",\n    \"from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_evaluate_graph\\n\",\n    \"from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_extractor\\n\",\n    \"\\n\",\n    \"from witwidget.notebook.visualization import WitConfigBuilder\\n\",\n    \"from witwidget.notebook.visualization import WitWidget\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"TsplOJGqWCf5\"\n   },\n   \"source\": [\n    \"# Download and Understand the Data\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"vFOQ4AaIcAn2\"\n   },\n   \"source\": [\n    \"[Labeled Faces in the Wild](http://vis-www.cs.umass.edu/lfw/) is a public benchmark dataset for face verification, also 
known as pair matching. LFW contains more than 13,000 images of faces collected from the web.\\n\",\n    \"\\n\",\n    \"We ran FaceSSD predictions on this dataset to predict whether a face is present in a given image. In this Colab, we will slice data according to gender to observe whether there are any significant differences in model performance between gender groups.\\n\",\n    \"\\n\",\n    \"If there is more than one face in an image, gender is labeled as \\\"MISSING\\\".\\n\",\n    \"\\n\",\n    \"We've hosted the dataset on Google Cloud Platform for convenience. Run the following code to download the data from GCP. The data will take about a minute to download and analyze.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"NdLBi6tN5i7I\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"data_location = tf.keras.utils.get_file('lfw_dataset.tf', 'https://storage.googleapis.com/facessd_dataset/lfw_dataset.tfrecord')\\n\",\n    \"\\n\",\n    \"stats = tfdv.generate_statistics_from_tfrecord(data_location=data_location)\\n\",\n    \"tfdv.visualize_statistics(stats)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"cNODEwE5x7Uo\"\n   },\n   \"source\": [\n    \"# Defining Constants\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"ZF4NO87uFxdQ\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"BASE_DIR = tempfile.gettempdir()\\n\",\n    \"\\n\",\n    \"tfma_eval_result_path = os.path.join(BASE_DIR, 'tfma_eval_result')\\n\",\n    \"\\n\",\n    \"compute_confidence_intervals = True\\n\",\n    \"\\n\",\n    \"slice_key = 'object/groundtruth/Gender'\\n\",\n    \"label_key = 'object/groundtruth/face'\\n\",\n    \"prediction_key = 'object/prediction/face'\\n\",\n    \"\\n\",\n    \"feature_map = {\\n\",\n    \"    slice_key:\\n\",\n    \"        tf.io.FixedLenFeature([], tf.string, 
default_value=['none']),\\n\",\n    \"    label_key:\\n\",\n    \"        tf.io.FixedLenFeature([], tf.float32, default_value=[0.0]),\\n\",\n    \"    prediction_key:\\n\",\n    \"        tf.io.FixedLenFeature([], tf.float32, default_value=[0.0]),\\n\",\n    \"}\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"gVLHwuhEyI8R\"\n   },\n   \"source\": [\n    \"# Model Agnostic Config for TFMA\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"ej1nGCZSyJIK\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"model_agnostic_config = agnostic_predict.ModelAgnosticConfig(\\n\",\n    \"    label_keys=[label_key],\\n\",\n    \"    prediction_keys=[prediction_key],\\n\",\n    \"    feature_spec=feature_map)\\n\",\n    \"\\n\",\n    \"model_agnostic_extractors = [\\n\",\n    \"    model_agnostic_extractor.ModelAgnosticExtractor(\\n\",\n    \"        model_agnostic_config=model_agnostic_config, desired_batch_size=3),\\n\",\n    \"    tfma.extractors.slice_key_extractor.SliceKeyExtractor(\\n\",\n    \"          [tfma.slicer.SingleSliceSpec(),\\n\",\n    \"           tfma.slicer.SingleSliceSpec(columns=[slice_key])])\\n\",\n    \"]\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"wqkk9SkvyVkR\"\n   },\n   \"source\": [\n    \"# Fairness Callbacks and Computing Fairness Metrics\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"A0icrlliBCOb\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# Helper class for counting examples in beam PCollection\\n\",\n    \"class CountExamples(beam.CombineFn):\\n\",\n    \"    def __init__(self, message):\\n\",\n    \"      self.message = message\\n\",\n    \"\\n\",\n    \"    def create_accumulator(self):\\n\",\n    \"      return 0\\n\",\n    \"\\n\",\n    \"    def add_input(self, current_sum, element):\\n\",\n    \"      return 
current_sum + 1\\n\",\n    \"\\n\",\n    \"    def merge_accumulators(self, accumulators): \\n\",\n    \"      return sum(accumulators)\\n\",\n    \"\\n\",\n    \"    def extract_output(self, final_sum):\\n\",\n    \"      if final_sum:\\n\",\n    \"        print(\\\"%s: %d\\\"%(self.message, final_sum))\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"mRQjdjp9yVv2\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"metrics_callbacks = [\\n\",\n    \"  tfma.post_export_metrics.fairness_indicators(\\n\",\n    \"      thresholds=[0.1, 0.3, 0.5, 0.7, 0.9],\\n\",\n    \"      labels_key=label_key,\\n\",\n    \"      target_prediction_keys=[prediction_key]),\\n\",\n    \"  tfma.post_export_metrics.auc(\\n\",\n    \"      curve='PR',\\n\",\n    \"      labels_key=label_key,\\n\",\n    \"      target_prediction_keys=[prediction_key]),\\n\",\n    \"]\\n\",\n    \"\\n\",\n    \"eval_shared_model = tfma.types.EvalSharedModel(\\n\",\n    \"    add_metrics_callbacks=metrics_callbacks,\\n\",\n    \"    construct_fn=model_agnostic_evaluate_graph.make_construct_fn(\\n\",\n    \"        add_metrics_callbacks=metrics_callbacks,\\n\",\n    \"        config=model_agnostic_config))\\n\",\n    \"\\n\",\n    \"with beam.Pipeline() as pipeline:\\n\",\n    \"  # Read data.\\n\",\n    \"  data = (\\n\",\n    \"      pipeline\\n\",\n    \"      | 'ReadData' >> beam.io.ReadFromTFRecord(data_location))\\n\",\n    \"\\n\",\n    \"  # Count all examples.\\n\",\n    \"  data_count = (\\n\",\n    \"      data | 'Count number of examples' >> beam.CombineGlobally(\\n\",\n    \"          CountExamples('Before filtering \\\"Gender:MISSING\\\"')))\\n\",\n    \"\\n\",\n    \"  # If there are more than one face in image, the gender feature is 'MISSING'\\n\",\n    \"  # and we are filtering that image out.\\n\",\n    \"  def filter_missing_gender(element):\\n\",\n    \"    example = tf.train.Example.FromString(element)\\n\",\n    \"    
if example.features.feature[slice_key].bytes_list.value[0] != b'MISSING':\\n\",\n    \"      yield element\\n\",\n    \"\\n\",\n    \"  filtered_data = (\\n\",\n    \"      data\\n\",\n    \"      | 'Filter Missing Gender' >> beam.ParDo(filter_missing_gender))\\n\",\n    \"\\n\",\n    \"  # Count after filtering \\\"Gender:MISSING\\\".\\n\",\n    \"  filtered_data_count = (\\n\",\n    \"      filtered_data | 'Count number of examples after filtering'\\n\",\n    \"      >> beam.CombineGlobally(\\n\",\n    \"          CountExamples('After filtering \\\"Gender:MISSING\\\"')))\\n\",\n    \"\\n\",\n    \"  # Because LFW data set has always faces by default, we are adding\\n\",\n    \"  # labels as 1.0 for all images.\\n\",\n    \"  def add_face_groundtruth(element):\\n\",\n    \"    example = tf.train.Example.FromString(element)\\n\",\n    \"    example.features.feature[label_key].float_list.value[:] = [1.0]\\n\",\n    \"    yield example.SerializeToString()\\n\",\n    \"\\n\",\n    \"  final_data = (\\n\",\n    \"      filtered_data\\n\",\n    \"      | 'Add Face Groundtruth' >> beam.ParDo(add_face_groundtruth))\\n\",\n    \"\\n\",\n    \"  # Run TFMA.\\n\",\n    \"  _ = (\\n\",\n    \"      final_data\\n\",\n    \"      | 'ExtractEvaluateAndWriteResults' >>\\n\",\n    \"       tfma.ExtractEvaluateAndWriteResults(\\n\",\n    \"                 eval_shared_model=eval_shared_model,\\n\",\n    \"                 compute_confidence_intervals=compute_confidence_intervals,\\n\",\n    \"                 output_path=tfma_eval_result_path,\\n\",\n    \"                 extractors=model_agnostic_extractors))\\n\",\n    \"\\n\",\n    \"eval_result = tfma.load_eval_result(output_path=tfma_eval_result_path)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"ktlASJQIzE3l\"\n   },\n   \"source\": [\n    \"# Render Fairness Indicators\\n\",\n    \"\\n\",\n    \"Render the Fairness Indicators widget with the exported evaluation results.\\n\",\n    
\"\\n\",\n    \"Below you will see bar charts displaying performance of each slice of the data on selected metrics. You can adjust the baseline comparison slice as well as the displayed threshold(s) using the drop down menus at the top of the visualization.\\n\",\n    \"\\n\",\n    \"A relevant metric for this use case is true positive rate, also known as recall. Use the selector on the left hand side to choose the graph for true_positive_rate. These metric values match the values displayed on the [model card](https://modelcards.withgoogle.com/face-detection).\\n\",\n    \"\\n\",\n    \"For some photos, gender is labeled as young instead of male or female, if the person in the photo is too young to be accurately annotated.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"JNaNhTCTAMHm\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"widget_view.render_fairness_indicator(eval_result=eval_result,\\n\",\n    \"                                      slicing_column=slice_key)\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"accelerator\": \"GPU\",\n  \"colab\": {\n   \"collapsed_sections\": [\n    \"Sxt-9qpNgPxo\"\n   ],\n   \"name\": \"Facessd Fairness Indicators Example Colab.ipynb\",\n   \"provenance\": [],\n   \"toc_visible\": true\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3 (ipykernel)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.22\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 4\n}\n"
  },
  {
    "path": "docs/tutorials/Fairness_Indicators_Example_Colab.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"Tce3stUlHN0L\"\n   },\n   \"source\": [\n    \"##### Copyright 2020 The TensorFlow Authors.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"tuOe1ymfHZPu\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#@title Licensed under the Apache License, Version 2.0 (the \\\"License\\\");\\n\",\n    \"# you may not use this file except in compliance with the License.\\n\",\n    \"# You may obtain a copy of the License at\\n\",\n    \"#\\n\",\n    \"# https://www.apache.org/licenses/LICENSE-2.0\\n\",\n    \"#\\n\",\n    \"# Unless required by applicable law or agreed to in writing, software\\n\",\n    \"# distributed under the License is distributed on an \\\"AS IS\\\" BASIS,\\n\",\n    \"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\\n\",\n    \"# See the License for the specific language governing permissions and\\n\",\n    \"# limitations under the License.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"aalPefrUUplk\"\n   },\n   \"source\": [\n    \"# Introduction to Fairness Indicators\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"MfBg1C5NB3X0\"\n   },\n   \"source\": [\n    \"<div class=\\\"buttons-wrapper\\\">\\n\",\n    \"  <a class=\\\"md-button\\\" target=\\\"_blank\\\" href=\\n\",\n    \"     \\\"https://tensorflow.github.io/fairness-indicators/tutorials/Fairness_Indicators_Example_Colab\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img src=\\\"https://www.tensorflow.org/images/tf_logo_32px.png\\\">\\n\",\n    \"      View on TensorFlow.org\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"  <a class=\\\"md-button\\\" target=\\\"_blank\\\" href=\\n\",\n    \"     
\\\"https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_Example_Colab.ipynb\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img src=\\n\",\n    \"\\t   \\\"https://www.tensorflow.org/images/colab_logo_32px.png\\\">\\n\",\n    \"      Run in Google Colab\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"  <a class=\\\"md-button\\\" target=\\\"_blank\\\" href=\\n\",\n    \"     \\\"https://github.com/tensorflow/fairness-indicators/blob/master/docs/tutorials/Fairness_Indicators_Example_Colab.ipynb\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img width=\\\"32px\\\" src=\\n\",\n    \"\\t   \\\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\\\">\\n\",\n    \"      View source on GitHub\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"  <a class=\\\"md-button\\\" href=\\n\",\n    \"     \\\"https://storage.googleapis.com/tensorflow_docs/fairness-indicators/g3doc/tutorials/Fairness_Indicators_Example_Colab.ipynb\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img src=\\n\",\n    \"\\t   \\\"https://www.tensorflow.org/images/download_logo_32px.png\\\">\\n\",\n    \"      Download notebook\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"</div>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"YWcPbUNg1yez\"\n   },\n   \"source\": [\n    \"## Overview\\n\",\n    \"\\n\",\n    \"Fairness Indicators is a suite of tools built on top of [TensorFlow Model Analysis (TFMA)](https://tensorflow.github.io/model-analysis/get_started) that enable regular evaluation of fairness metrics in product pipelines. TFMA is a library for evaluating both TensorFlow and non-TensorFlow machine learning models. 
It allows you to evaluate your models on large amounts of data in a distributed manner, compute in-graph and other metrics over different slices of data, and visualize them in notebooks. \\n\",\n    \"\\n\",\n    \"Fairness Indicators is packaged with [TensorFlow Data Validation (TFDV)](https://tensorflow.github.io/data-validation/get_started) and the [What-If Tool](https://pair-code.github.io/what-if-tool/). Using Fairness Indicators allows you to: \\n\",\n    \"\\n\",\n    \"* Evaluate model performance, sliced across defined groups of users\\n\",\n    \"* Gain confidence about results with confidence intervals and evaluations at multiple thresholds\\n\",\n    \"* Evaluate the distribution of datasets\\n\",\n    \"* Dive deep into individual slices to explore root causes and opportunities for improvement\\n\",\n    \"\\n\",\n    \"In this notebook, you will use Fairness Indicators to fix fairness issues in a model you train using the [Civil Comments dataset](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification). Watch this [video](https://www.youtube.com/watch?v=pHT-ImFXPQo) for more details and context on the real-world scenario this is based on, which is also one of the primary motivations for creating Fairness Indicators.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"GjuCFktB2IJW\"\n   },\n   \"source\": [\n    \"## Dataset\\n\",
\n    \"\\n\",\n    \"In this notebook, you will work with the [Civil Comments dataset](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification), approximately 2 million public comments released by the [Civil Comments platform](https://medium.com/@aja_15265/saying-goodbye-to-civil-comments-41859d3a2b1d) in 2017 for ongoing research. This effort was sponsored by [Jigsaw](https://jigsaw.google.com/), which has hosted competitions on Kaggle to help classify toxic comments as well as minimize unintended model bias.\\n\",\n    \"\\n\",\n    \"Each text comment in the dataset has a toxicity label, with the label being 1 if the comment is toxic and 0 if the comment is non-toxic. Within the data, a subset of comments are labeled with a variety of identity attributes, including categories for gender, sexual orientation, religion, and race or ethnicity.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"u33JXdluZ2lG\"\n   },\n   \"source\": [\n    \"## Setup\\n\",\n    \"\\n\",\n    \"Install `fairness-indicators` and `witwidget`.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"EoRNffG599XP\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"!pip install -q -U pip==20.2\\n\",\n    \"\\n\",\n    \"!pip install -q fairness-indicators\\n\",\n    \"!pip install -q witwidget\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"alYUSbyv59j5\"\n   },\n   \"source\": [\n    \"You must restart the Colab runtime after installing. Select **Runtime > Restart runtime** from the Colab menu.\\n\",
\n    \"\\n\",\n    \"Do not proceed with the rest of this tutorial without first restarting the runtime.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"RbRUqXDm6f1N\"\n   },\n   \"source\": [\n    \"Import all other required libraries.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"B8dlyTyiTe-9\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import os\\n\",\n    \"import tempfile\\n\",\n    \"import apache_beam as beam\\n\",\n    \"import numpy as np\\n\",\n    \"import pandas as pd\\n\",\n    \"from datetime import datetime\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"from google.protobuf import text_format\\n\",\n    \"\\n\",\n    \"import tensorflow_hub as hub\\n\",\n    \"import tensorflow as tf\\n\",\n    \"import tensorflow_model_analysis as tfma\\n\",\n    \"import tensorflow_data_validation as tfdv\\n\",\n    \"\\n\",\n    \"from tfx_bsl.tfxio import tensor_adapter\\n\",\n    \"from tfx_bsl.tfxio import tf_example_record\\n\",\n    \"\\n\",\n    \"from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators\\n\",\n    \"from tensorflow_model_analysis.addons.fairness.view import widget_view\\n\",\n    \"\\n\",\n    \"from fairness_indicators.tutorial_utils import util\\n\",\n    \"\\n\",\n    \"from witwidget.notebook.visualization import WitConfigBuilder\\n\",\n    \"from witwidget.notebook.visualization import WitWidget\\n\",\n    \"\\n\",\n    \"from tensorflow_metadata.proto.v0 import schema_pb2\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"TsplOJGqWCf5\"\n   },\n   \"source\": [\n    \"## Download and analyze the data\\n\",
\n    \"\\n\",\n    \"By default, this notebook downloads a preprocessed version of this dataset, but you may use the original dataset and re-run the processing steps if desired. In the original dataset, each comment is labeled with the percentage of raters who believed that a comment corresponds to a particular identity. For example, a comment might be labeled with the following: { male: 0.3, female: 1.0, transgender: 0.0, heterosexual: 0.8, homosexual_gay_or_lesbian: 1.0 }. The processing step groups identities by category (gender, sexual_orientation, etc.) and removes identities with a score less than 0.5, so the example above would be converted to: { gender: [female], sexual_orientation: [heterosexual, homosexual_gay_or_lesbian] }\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"qmt4gkBFRBD2\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"download_original_data = False #@param {type:\\\"boolean\\\"}\\n\",\n    \"\\n\",\n    \"if download_original_data:\\n\",\n    \"  train_tf_file = tf.keras.utils.get_file('train_tf.tfrecord',\\n\",\n    \"                                          'https://storage.googleapis.com/civil_comments_dataset/train_tf.tfrecord')\\n\",\n    \"  validate_tf_file = tf.keras.utils.get_file('validate_tf.tfrecord',\\n\",\n    \"                                             'https://storage.googleapis.com/civil_comments_dataset/validate_tf.tfrecord')\\n\",\n    \"\\n\",\n    \"  # The identity terms list will be grouped together by their categories\\n\",\n    \"  # (see 'IDENTITY_COLUMNS') on threshold 0.5. 
Only the identity term column,\\n\",\n    \"  # text column and label column will be kept after processing.\\n\",\n    \"  train_tf_file = util.convert_comments_data(train_tf_file)\\n\",\n    \"  validate_tf_file = util.convert_comments_data(validate_tf_file)\\n\",\n    \"\\n\",\n    \"else:\\n\",\n    \"  train_tf_file = tf.keras.utils.get_file('train_tf_processed.tfrecord',\\n\",\n    \"                                          'https://storage.googleapis.com/civil_comments_dataset/train_tf_processed.tfrecord')\\n\",\n    \"  validate_tf_file = tf.keras.utils.get_file('validate_tf_processed.tfrecord',\\n\",\n    \"                                             'https://storage.googleapis.com/civil_comments_dataset/validate_tf_processed.tfrecord')\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"vFOQ4AaIcAn2\"\n   },\n   \"source\": [\n    \"Use TFDV to analyze the data and find potential problems in it, such as missing values and data imbalances, that can lead to fairness disparities.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"NdLBi6tN5i7I\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"stats = tfdv.generate_statistics_from_tfrecord(data_location=train_tf_file)\\n\",\n    \"tfdv.visualize_statistics(stats)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"AS9QiA96GXDE\"\n   },\n   \"source\": [\n    \"TFDV shows that there are some significant imbalances in the data which could lead to biased model outcomes. \\n\",\n    \"\\n\",\n    \"* The toxicity label (the value predicted by the model) is unbalanced. 
Only 8% of the examples in the training set are toxic, which means that a classifier could get 92% accuracy by predicting that all comments are non-toxic.\\n\",\n    \"\\n\",\n    \"* In the fields relating to identity terms, only 6.6k out of the 1.08 million (0.61%) training examples deal with homosexuality, and those related to bisexuality are even more rare. This indicates that performance on these slices may suffer due to lack of training data.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"9ekzb7vVnPCc\"\n   },\n   \"source\": [\n    \"## Prepare the data\\n\",\n    \"\\n\",\n    \"Define a feature map to parse the data. Each example will have a label, comment text, and identity features `sexual orientation`, `gender`, `religion`, `race`, and `disability` that are associated with the text.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"n4_nXQDykX6W\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"BASE_DIR = tempfile.gettempdir()\\n\",\n    \"\\n\",\n    \"TEXT_FEATURE = 'comment_text'\\n\",\n    \"LABEL = 'toxicity'\\n\",\n    \"FEATURE_MAP = {\\n\",\n    \"    # Label:\\n\",\n    \"    LABEL: tf.io.FixedLenFeature([], tf.float32),\\n\",\n    \"    # Text:\\n\",\n    \"    TEXT_FEATURE:  tf.io.FixedLenFeature([], tf.string),\\n\",\n    \"\\n\",\n    \"    # Identities:\\n\",\n    \"    'sexual_orientation':tf.io.VarLenFeature(tf.string),\\n\",\n    \"    'gender':tf.io.VarLenFeature(tf.string),\\n\",\n    \"    'religion':tf.io.VarLenFeature(tf.string),\\n\",\n    \"    'race':tf.io.VarLenFeature(tf.string),\\n\",\n    \"    'disability':tf.io.VarLenFeature(tf.string),\\n\",\n    \"}\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"1B1ROCM__y8C\"\n   },\n   \"source\": [\n    \"Next, set up an input function to feed data into the model. 
Add a weight column to each example and upweight the toxic examples to account for the class imbalance identified by TFDV. The identity features are used only during the evaluation phase, as only the comment text is fed into the model during training.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"YwoC-dzEDid3\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"def train_input_fn():\\n\",\n    \"  def parse_function(serialized):\\n\",\n    \"    parsed_example = tf.io.parse_single_example(\\n\",\n    \"        serialized=serialized, features=FEATURE_MAP)\\n\",\n    \"    # Adds a weight column to deal with unbalanced classes.\\n\",\n    \"    parsed_example['weight'] = tf.add(parsed_example[LABEL], 0.1)\\n\",\n    \"    return (parsed_example,\\n\",\n    \"            parsed_example[LABEL])\\n\",\n    \"  train_dataset = tf.data.TFRecordDataset(\\n\",\n    \"      filenames=[train_tf_file]).map(parse_function).batch(512)\\n\",\n    \"  return train_dataset\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"mfbgerCsEOmN\"\n   },\n   \"source\": [\n    \"## Train the model\\n\",\n    \"\\n\",\n    \"Create and train a deep learning model on the data.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"JaGvNrVijfws\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"model_dir = os.path.join(BASE_DIR, 'train', datetime.now().strftime(\\n\",\n    \"    \\\"%Y%m%d-%H%M%S\\\"))\\n\",\n    \"\\n\",\n    \"embedded_text_feature_column = hub.text_embedding_column(\\n\",\n    \"    key=TEXT_FEATURE,\\n\",\n    \"    module_spec='https://tfhub.dev/google/nnlm-en-dim128/1')\\n\",\n    \"\\n\",\n    \"classifier = tf.estimator.DNNClassifier(\\n\",\n    \"    hidden_units=[500, 100],\\n\",\n    \"    weight_column='weight',\\n\",\n    \"    feature_columns=[embedded_text_feature_column],\\n\",
\n    \"    optimizer=tf.keras.optimizers.legacy.Adagrad(learning_rate=0.003),\\n\",\n    \"    loss_reduction=tf.losses.Reduction.SUM,\\n\",\n    \"    n_classes=2,\\n\",\n    \"    model_dir=model_dir)\\n\",\n    \"\\n\",\n    \"classifier.train(input_fn=train_input_fn, steps=1000)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"jTPqije9Eg5b\"\n   },\n   \"source\": [\n    \"## Analyze the model\\n\",\n    \"\\n\",\n    \"After obtaining the trained model, analyze it to compute fairness metrics using TFMA and Fairness Indicators. Begin by exporting the model as a [SavedModel](https://www.tensorflow.org/guide/saved_model). \"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"-vRc-Jyp8dRm\"\n   },\n   \"source\": [\n    \"### Export SavedModel\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"QLjiy5VCzlRw\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"def eval_input_receiver_fn():\\n\",\n    \"  serialized_tf_example = tf.compat.v1.placeholder(\\n\",\n    \"      dtype=tf.string, shape=[None], name='input_example_placeholder')\\n\",\n    \"\\n\",\n    \"  # This *must* be a dictionary containing a single key 'examples', which\\n\",\n    \"  # points to the input placeholder.\\n\",\n    \"  receiver_tensors = {'examples': serialized_tf_example}\\n\",\n    \"\\n\",\n    \"  features = tf.io.parse_example(serialized_tf_example, FEATURE_MAP)\\n\",\n    \"  features['weight'] = tf.ones_like(features[LABEL])\\n\",\n    \"\\n\",\n    \"  return tfma.export.EvalInputReceiver(\\n\",\n    \"    features=features,\\n\",\n    \"    receiver_tensors=receiver_tensors,\\n\",\n    \"    labels=features[LABEL])\\n\",\n    \"\\n\",\n    \"tfma_export_dir = tfma.export.export_eval_savedmodel(\\n\",\n  \"  estimator=classifier,\\n\",\n  \"  export_dir_base=os.path.join(BASE_DIR, 'tfma_eval_model'),\\n\",\n  \"  eval_input_receiver_fn=eval_input_receiver_fn)\"
\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"3j8ODcee8rQ8\"\n   },\n   \"source\": [\n    \"### Compute Fairness Metrics\\n\",\n    \"\\n\",\n    \"Select the identity to compute metrics for and whether to run with confidence intervals using the dropdown in the panel on the right.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"7shDmJbx9mqa\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#@title Fairness Indicators Computation Options\\n\",\n    \"tfma_eval_result_path = os.path.join(BASE_DIR, 'tfma_eval_result')\\n\",\n    \"\\n\",\n    \"#@markdown Modify the slice_selection for experiments on other identities.\\n\",\n    \"slice_selection = 'sexual_orientation' #@param [\\\"sexual_orientation\\\", \\\"gender\\\", \\\"religion\\\", \\\"race\\\", \\\"disability\\\"]\\n\",\n    \"print(f'Slice selection: {slice_selection}')\\n\",\n    \"#@markdown Confidence intervals can help you make better decisions regarding your data, but because they require computing multiple resamples, they are slower, particularly in the Colab environment, which cannot take advantage of parallelization.\\n\",\n    \"compute_confidence_intervals = False #@param {type:\\\"boolean\\\"}\\n\",\n    \"print(f'Compute confidence intervals: {compute_confidence_intervals}')\\n\",\n    \"\\n\",\n    \"# Define slices that you want the evaluation to run on.\\n\",\n    \"eval_config_pbtxt = \\\"\\\"\\\"\\n\",\n    \"    model_specs {\\n\",\n    \"      label_key: \\\"%s\\\"\\n\",\n    \"    }\\n\",\n    \"    metrics_specs {\\n\",\n    \"      metrics {\\n\",\n    \"        class_name: \\\"FairnessIndicators\\\"\\n\",\n    \"        config: '{ \\\"thresholds\\\": [0.1, 0.3, 0.5, 0.7, 0.9] }'\\n\",\n    \"      }\\n\",\n    \"    }\\n\",\n    \"    slicing_specs {}  # overall slice\\n\",\n    \"    slicing_specs {\\n\",\n    \"      feature_keys: 
[\\\"%s\\\"]\\n\",\n    \"    }\\n\",\n    \"    options {\\n\",\n    \"      compute_confidence_intervals { value: %s }\\n\",\n    \"      disabled_outputs { values: \\\"analysis\\\" }\\n\",\n    \"    }\\n\",\n    \"  \\\"\\\"\\\" % (LABEL, slice_selection, compute_confidence_intervals)\\n\",\n    \"eval_config = text_format.Parse(eval_config_pbtxt, tfma.EvalConfig())\\n\",\n    \"eval_shared_model = tfma.default_eval_shared_model(\\n\",\n    \"    eval_saved_model_path=tfma_export_dir)\\n\",\n    \"\\n\",\n    \"schema = text_format.Parse(\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"        tensor_representation_group {\\n\",\n    \"          key: \\\"\\\"\\n\",\n    \"          value {\\n\",\n    \"            tensor_representation {\\n\",\n    \"              key: \\\"comment_text\\\"\\n\",\n    \"              value {\\n\",\n    \"                dense_tensor {\\n\",\n    \"                  column_name: \\\"comment_text\\\"\\n\",\n    \"                  shape {}\\n\",\n    \"                }\\n\",\n    \"              }\\n\",\n    \"            }\\n\",\n    \"          }\\n\",\n    \"        }\\n\",\n    \"        feature {\\n\",\n    \"          name: \\\"comment_text\\\"\\n\",\n    \"          type: BYTES\\n\",\n    \"        }\\n\",\n    \"        feature {\\n\",\n    \"          name: \\\"toxicity\\\"\\n\",\n    \"          type: FLOAT\\n\",\n    \"        }\\n\",\n    \"        feature {\\n\",\n    \"          name: \\\"sexual_orientation\\\"\\n\",\n    \"          type: BYTES\\n\",\n    \"        }\\n\",\n    \"        feature {\\n\",\n    \"          name: \\\"gender\\\"\\n\",\n    \"          type: BYTES\\n\",\n    \"        }\\n\",\n    \"        feature {\\n\",\n    \"          name: \\\"religion\\\"\\n\",\n    \"          type: BYTES\\n\",\n    \"        }\\n\",\n    \"        feature {\\n\",\n    \"          name: \\\"race\\\"\\n\",\n    \"          type: BYTES\\n\",\n    \"        }\\n\",\n    \"        feature {\\n\",\n    \"          name: 
\\\"disability\\\"\\n\",\n    \"          type: BYTES\\n\",\n    \"        }\\n\",\n    \"        \\\"\\\"\\\", schema_pb2.Schema())\\n\",\n    \"tfxio = tf_example_record.TFExampleRecord(\\n\",\n    \"    file_pattern=validate_tf_file,\\n\",\n    \"    schema=schema,\\n\",\n    \"    raw_record_column_name=tfma.ARROW_INPUT_COLUMN)\\n\",\n    \"tensor_adapter_config = tensor_adapter.TensorAdapterConfig(\\n\",\n    \"    arrow_schema=tfxio.ArrowSchema(),\\n\",\n    \"    tensor_representations=tfxio.TensorRepresentations())\\n\",\n    \"\\n\",\n    \"with beam.Pipeline() as pipeline:\\n\",\n    \"  (pipeline\\n\",\n    \"    | 'ReadFromTFRecordToArrow' >> tfxio.BeamSource()\\n\",\n    \"    | 'ExtractEvaluateAndWriteResults' >> tfma.ExtractEvaluateAndWriteResults(\\n\",\n    \"        eval_config=eval_config,\\n\",\n    \"        eval_shared_model=eval_shared_model,\\n\",\n    \"        output_path=tfma_eval_result_path,\\n\",\n    \"        tensor_adapter_config=tensor_adapter_config))\\n\",\n    \"\\n\",\n    \"eval_result = tfma.load_eval_result(output_path=tfma_eval_result_path)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"jtDpTBPeRw2d\"\n   },\n   \"source\": [\n    \"### Visualize data using the What-if Tool\\n\",\n    \"\\n\",\n    \"In this section, you'll use the What-If Tool's interactive visual interface to explore and manipulate data at a micro-level.\\n\",\n    \"\\n\",\n    \"Each point on the scatter plot on the right-hand panel represents one of the examples in the subset loaded into the tool. Click on one of the points to see details about this particular example in the left-hand panel. The comment text, ground truth toxicity, and applicable identities are shown. 
At the bottom of this left-hand panel, you see the inference results from the model you just trained.\\n\",\n    \"\\n\",\n    \"Modify the text of the example and then click the **Run inference** button to view how your changes caused the perceived toxicity prediction to change.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"wtjZo4BDlV1m\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"DEFAULT_MAX_EXAMPLES = 1000\\n\",\n    \"\\n\",\n    \"# Load 100000 examples in memory. When first rendered, \\n\",\n    \"# What-If Tool should only display 1000 of these due to browser constraints.\\n\",\n    \"def wit_dataset(file, num_examples=100000):\\n\",\n    \"  dataset = tf.data.TFRecordDataset(\\n\",\n    \"      filenames=[file]).take(num_examples)\\n\",\n    \"  return [tf.train.Example.FromString(d.numpy()) for d in dataset]\\n\",\n    \"\\n\",\n    \"wit_data = wit_dataset(train_tf_file)\\n\",\n    \"config_builder = WitConfigBuilder(wit_data[:DEFAULT_MAX_EXAMPLES]).set_estimator_and_feature_spec(\\n\",\n    \"    classifier, FEATURE_MAP).set_label_vocab(['non-toxicity', LABEL]).set_target_feature(LABEL)\\n\",\n    \"wit = WitWidget(config_builder)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"ktlASJQIzE3l\"\n   },\n   \"source\": [\n    \"## Render Fairness Indicators\\n\",\n    \"\\n\",\n    \"Render the Fairness Indicators widget with the exported evaluation results.\\n\",\n    \"\\n\",\n    \"Below you will see bar charts displaying performance of each slice of the data on selected metrics. You can adjust the baseline comparison slice as well as the displayed threshold(s) using the dropdown menus at the top of the visualization. \\n\",\n    \"\\n\",\n    \"The Fairness Indicator widget is integrated with the What-If Tool rendered above. 
If you select one slice of the data in the bar chart, the What-If Tool will update to show you examples from the selected slice. When the data reloads in the What-If Tool above, try modifying **Color By** to **toxicity**. This can give you a visual understanding of the toxicity balance of examples by slice.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"JNaNhTCTAMHm\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"event_handlers={'slice-selected':\\n\",\n    \"                wit.create_selection_callback(wit_data, DEFAULT_MAX_EXAMPLES)}\\n\",\n    \"widget_view.render_fairness_indicator(eval_result=eval_result,\\n\",\n    \"                                      slicing_column=slice_selection,\\n\",\n    \"                                      event_handlers=event_handlers\\n\",\n    \"                                      )\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"nRuZsLr6V_fY\"\n   },\n   \"source\": [\n    \"With this particular dataset and task, systematically higher false positive and false negative rates for certain identities can lead to negative consequences. For example, in a content moderation system, a higher-than-overall false positive rate for a certain group can lead to those voices being silenced. Thus, it is important to regularly evaluate these types of criteria as you develop and improve models, and utilize tools such as Fairness Indicators, TFDV, and WIT to help illuminate potential problems. 
Once you've identified fairness issues, you can experiment with new data sources, data balancing, or other techniques to improve performance on underperforming groups.\\n\",\n    \"\\n\",\n    \"See [here](../../guide/guidance) for more information and guidance on how to use Fairness Indicators.\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"wCMEMtGfx0Ti\"\n   },\n   \"source\": [\n    \"## Use fairness evaluation results\\n\",\n    \"\\n\",\n    \"The [`eval_result`](https://tensorflow.github.io/model-analysis/api_docs/python/tfma/#tensorflow_model_analysis.EvalResult) object, rendered above in `render_fairness_indicator()`, has its own API that you can leverage to read TFMA results into your programs.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"z6stkMLwyfza\"\n   },\n   \"source\": [\n    \"### Get evaluated slices and metrics\\n\",\n    \"\\n\",\n    \"Use [`get_slice_names()`](https://tensorflow.github.io/model-analysis/api_docs/python/tfma/#tensorflow_model_analysis.EvalResult.get_slice_names) and [`get_metric_names()`](https://tensorflow.github.io/model-analysis/api_docs/python/tfma/#tensorflow_model_analysis.EvalResult.get_metric_names) to get the evaluated slices and metrics, respectively.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"eXrt7SdZyzWD\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"pp = pprint.PrettyPrinter()\\n\",\n    \"\\n\",\n    \"print(\\\"Slices:\\\")\\n\",\n    \"pp.pprint(eval_result.get_slice_names())\\n\",\n    \"print(\\\"\\\\nMetrics:\\\")\\n\",\n    \"pp.pprint(eval_result.get_metric_names())\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"ctAvudY2zUu4\"\n   },\n   \"source\": [\n    \"Use 
[`get_metrics_for_slice()`](https://tensorflow.github.io/model-analysis/api_docs/python/tfma/#tensorflow_model_analysis.EvalResult.get_metrics_for_slice) to get the metrics for a particular slice as a dictionary mapping metric names to [metric values](https://github.com/tensorflow/model-analysis/blob/cdb6790dcd7a37c82afb493859b3ef4898963fee/tensorflow_model_analysis/proto/metrics_for_slice.proto#L194).\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"zjCxZGHmzF0R\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"baseline_slice = ()\\n\",\n    \"heterosexual_slice = (('sexual_orientation', 'heterosexual'),)\\n\",\n    \"\\n\",\n    \"print(\\\"Baseline metric values:\\\")\\n\",\n    \"pp.pprint(eval_result.get_metrics_for_slice(baseline_slice))\\n\",\n    \"print(\\\"\\\\nHeterosexual metric values:\\\")\\n\",\n    \"pp.pprint(eval_result.get_metrics_for_slice(heterosexual_slice))\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"UDo3LhoR0Rq1\"\n   },\n   \"source\": [\n    \"Use [`get_metrics_for_all_slices()`](https://tensorflow.github.io/model-analysis/api_docs/python/tfma/#tensorflow_model_analysis.EvalResult.get_metrics_for_all_slices) to get the metrics for all slices as a dictionary mapping each slice to the corresponding metrics dictionary you obtain from running `get_metrics_for_slice()` on it.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"96N2l2xI0fZd\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"pp.pprint(eval_result.get_metrics_for_all_slices())\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"accelerator\": \"GPU\",\n  \"colab\": {\n   \"collapsed_sections\": [],\n   \"name\": \"Fairness Indicators Example Colab.ipynb\",\n   \"private_outputs\": true,\n   \"provenance\": [],\n   \"toc_visible\": true\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3 (ipykernel)\",\n   
\"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.22\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 4\n}\n"
  },
  {
    "path": "docs/tutorials/Fairness_Indicators_Pandas_Case_Study.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"Bfrh3DUze0QN\"\n   },\n   \"source\": [\n    \"##### Copyright 2020 The TensorFlow Authors.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"sx-jnufYfcJG\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#@title Licensed under the Apache License, Version 2.0 (the \\\"License\\\");\\n\",\n    \"# you may not use this file except in compliance with the License.\\n\",\n    \"# You may obtain a copy of the License at\\n\",\n    \"#\\n\",\n    \"# https://www.apache.org/licenses/LICENSE-2.0\\n\",\n    \"#\\n\",\n    \"# Unless required by applicable law or agreed to in writing, software\\n\",\n    \"# distributed under the License is distributed on an \\\"AS IS\\\" BASIS,\\n\",\n    \"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\\n\",\n    \"# See the License for the specific language governing permissions and\\n\",\n    \"# limitations under the License.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"s1bQihY6-Y4N\"\n   },\n   \"source\": [\n    \"# Pandas DataFrame to Fairness Indicators Case Study\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"XHTjeiUMeolM\"\n   },\n   \"source\": [\n    \"<div class=\\\"buttons-wrapper\\\">\\n\",\n    \"  <a class=\\\"md-button\\\" target=\\\"_blank\\\" href=\\n\",\n    \"     \\\"https://www.tensorflow.org/responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_Pandas_Case_Study\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img src=\\\"https://www.tensorflow.org/images/tf_logo_32px.png\\\">\\n\",\n    \"      View on TensorFlow.org\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"  <a class=\\\"md-button\\\" target=\\\"_blank\\\" href=\\n\",\n    \"     
\\\"https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_Pandas_Case_Study.ipynb\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img src=\\n\",\n    \"\\t   \\\"https://www.tensorflow.org/images/colab_logo_32px.png\\\">\\n\",\n    \"      Run in Google Colab\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"  <a class=\\\"md-button\\\" target=\\\"_blank\\\" href=\\n\",\n    \"     \\\"https://github.com/tensorflow/fairness-indicators/tree/master/docs/tutorials/Fairness_Indicators_Pandas_Case_Study.ipynb\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img width=\\\"32px\\\" src=\\n\",\n    \"\\t   \\\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\\\">\\n\",\n    \"      View source on GitHub\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"  <a class=\\\"md-button\\\" href=\\n\",\n    \"     \\\"https://storage.googleapis.com/tensorflow_docs/fairness-indicators/g3doc/tutorials/Fairness_Indicators_Pandas_Case_Study.ipynb\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img src=\\n\",\n    \"\\t   \\\"https://www.tensorflow.org/images/download_logo_32px.png\\\">\\n\",\n    \"      Download notebook\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"</div>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"ay80altXzvgZ\"\n   },\n   \"source\": [\n    \"## Case Study Overview\\n\",\n    \"In this case study we will apply [TensorFlow Model Analysis](https://tensorflow.github.io/model-analysis/get_started) and [Fairness Indicators](https://tensorflow.github.io/fairness-indicators) to evaluate data stored as a Pandas DataFrame, where each row contains ground truth labels, various features, and a model prediction. 
We will show how this workflow can be used to spot potential fairness concerns, independent of the framework one used to construct and train the model. As in this case study, we can analyze the results from any machine learning framework (e.g. TensorFlow, JAX, etc) once they are converted to a Pandas DataFrame.\\n\",\n    \" \\n\",\n    \"For this exercise, we will leverage the Deep Neural Network (DNN) model that was developed in the [Shape Constraints for Ethics with Tensorflow Lattice](https://colab.research.google.com/github/tensorflow/lattice/blob/master/docs/tutorials/shape_constraints_for_ethics.ipynb#scrollTo=uc0VwsT5nvQi) case study using the Law School Admissions dataset from the Law School Admissions Council (LSAC). This classifier attempts to predict whether or not a student will pass the bar, based on their Law School Admission Test (LSAT) score and undergraduate GPA.\\n\",\n    \"\\n\",\n    \"## LSAC Dataset\\n\",\n    \"The dataset used within this case study was originally collected for a study called '[LSAC National Longitudinal Bar Passage Study. LSAC Research Report Series](https://eric.ed.gov/?id=ED469370)' by Linda Wightman in 1998. 
The dataset is currently hosted [here](http://www.seaphe.org/databases.php).\\n\",\n    \"\\n\",\n    \"*   **dnn_bar_pass_prediction**: The bar passage prediction from the DNN model.\\n\",\n    \"*   **gender**: Gender of the student.\\n\",\n    \"*   **lsat**: LSAT score received by the student.\\n\",\n    \"*   **pass_bar**: Ground truth label indicating whether or not the student eventually passed the bar.\\n\",\n    \"*   **race**: Race of the student.\\n\",\n    \"*   **ugpa**: A student's undergraduate GPA.\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"Ob01ASKqixfw\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"!pip install -q -U pip==20.2\\n\",\n    \"\\n\",\n    \"!pip install -q -U \\\\\\n\",\n    \"  tensorflow-model-analysis==0.48.0 \\\\\\n\",\n    \"  tensorflow-data-validation==1.17.0 \\\\\\n\",\n    \"  tfx-bsl==1.17.1\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"tnxSvgkaSEIj\"\n   },\n   \"source\": [\n    \"## Importing required packages:\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"0q8cTfpTkEMP\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import os\\n\",\n    \"import tempfile\\n\",\n    \"import pandas as pd\\n\",\n    \"import six.moves.urllib as urllib\\n\",\n    \"import pprint\\n\",\n    \"\\n\",\n    \"import tensorflow_model_analysis as tfma\\n\",\n    \"from google.protobuf import text_format\\n\",\n    \"\\n\",\n    \"import tensorflow as tf\\n\",\n    \"tf.compat.v1.enable_v2_behavior()\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"b8kWW3t4-eS1\"\n   },\n   \"source\": [\n    \"## Download the data and explore the initial dataset.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"wMZJtgj0qJ0x\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    
\"# Download the LSAT dataset and setup the required filepaths.\\n\",\n    \"_DATA_ROOT = tempfile.mkdtemp(prefix='lsat-data')\\n\",\n    \"_DATA_PATH = 'https://storage.googleapis.com/lawschool_dataset/bar_pass_prediction.csv'\\n\",\n    \"_DATA_FILEPATH = os.path.join(_DATA_ROOT, 'bar_pass_prediction.csv')\\n\",\n    \"\\n\",\n    \"data = urllib.request.urlopen(_DATA_PATH)\\n\",\n    \"\\n\",\n    \"_LSAT_DF = pd.read_csv(data)\\n\",\n    \"\\n\",\n    \"# To simpliy the case study, we will only use the columns that will be used for\\n\",\n    \"# our model.\\n\",\n    \"_COLUMN_NAMES = [\\n\",\n    \"  'dnn_bar_pass_prediction',\\n\",\n    \"  'gender',\\n\",\n    \"  'lsat',\\n\",\n    \"  'pass_bar',\\n\",\n    \"  'race1',\\n\",\n    \"  'ugpa',\\n\",\n    \"]\\n\",\n    \"\\n\",\n    \"_LSAT_DF.dropna()\\n\",\n    \"_LSAT_DF['gender'] = _LSAT_DF['gender'].astype(str)\\n\",\n    \"_LSAT_DF['race1'] = _LSAT_DF['race1'].astype(str)\\n\",\n    \"_LSAT_DF = _LSAT_DF[_COLUMN_NAMES]\\n\",\n    \"\\n\",\n    \"_LSAT_DF.head()\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"GyeVg2s7-wlB\"\n   },\n   \"source\": [\n    \"## Configure Fairness Indicators.\\n\",\n    \"There are several parameters that you’ll need to take into account when using Fairness Indicators with a DataFrame \\n\",\n    \"\\n\",\n    \"*   Your input DataFrame must contain a prediction column and label column from your model. By default Fairness Indicators will look for a prediction column called `prediction` and a label column called `label` within your DataFrame.\\n\",\n    \"   *   If either of these values are not found a KeyError will be raised.\\n\",\n    \"\\n\",\n    \"*   In addition to a DataFrame, you’ll also need to include an `eval_config` that should include the metrics to compute, slices to compute the metrics on, and the column names for example labels and predictions. \\n\",\n    \"   *   `metrics_specs` will set the metrics to compute. 
The `FairnessIndicators` metric is required to render the fairness metrics, and you can see a list of additional optional metrics [here](https://tensorflow.github.io/model-analysis/metrics).\\n\",\n    \"\\n\",\n    \"   *   `slicing_specs` is an optional slicing parameter to specify which feature you’re interested in investigating. Within this case study `race1` is used; however, you can also set this value to another feature (for example, `gender` in the context of this DataFrame). If `slicing_specs` is not provided, all features will be included.\\n\",\n    \"   *   If your DataFrame includes a label or prediction column that is different from the default `prediction` or `label`, you can set `label_key` and `prediction_key` to a new value.\\n\",\n    \"\\n\",\n    \"*   If `output_path` is not specified, a temporary directory will be created.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"53caFasB5V9p\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# Specify Fairness Indicators in eval_config.\\n\",\n    \"eval_config = text_format.Parse(\\\"\\\"\\\"\\n\",\n    \"  model_specs {\\n\",\n    \"    prediction_key: 'dnn_bar_pass_prediction',\\n\",\n    \"    label_key: 'pass_bar'\\n\",\n    \"  }\\n\",\n    \"  metrics_specs {\\n\",\n    \"    metrics {class_name: \\\"AUC\\\"}\\n\",\n    \"    metrics {\\n\",\n    \"      class_name: \\\"FairnessIndicators\\\"\\n\",\n    \"      config: '{\\\"thresholds\\\": [0.50, 0.90]}'\\n\",\n    \"    }\\n\",\n    \"  }\\n\",\n    \"  slicing_specs {\\n\",\n    \"    feature_keys: 'race1'\\n\",\n    \"  }\\n\",\n    \"  slicing_specs {}\\n\",\n    \"  \\\"\\\"\\\", tfma.EvalConfig())\\n\",\n    \"\\n\",\n    \"# Run TensorFlow Model Analysis.\\n\",\n    \"eval_result = tfma.analyze_raw_data(\\n\",\n    \"  data=_LSAT_DF,\\n\",\n    \"  eval_config=eval_config,\\n\",\n    \"  output_path=_DATA_ROOT)\"\n   ]\n  },\n  {\n   \"cell_type\": 
\"markdown\",\n   \"metadata\": {\n    \"id\": \"KD96mw0e--DE\"\n   },\n   \"source\": [\n    \"## Explore model performance with Fairness Indicators.\\n\",\n    \"\\n\",\n    \"After running Fairness Indicators, we can visualize different metrics that we selected to analyze our models performance. Within this case study we’ve included Fairness Indicators and arbitrarily picked AUC.\\n\",\n    \"\\n\",\n    \"When we first look at the overall AUC for each race slice we can see a slight discrepancy in model performance, but nothing that is arguably alarming.\\n\",\n    \"\\n\",\n    \"*   **Asian**: 0.58\\n\",\n    \"*   **Black**: 0.58\\n\",\n    \"*   **Hispanic**: 0.58\\n\",\n    \"*   **Other**: 0.64\\n\",\n    \"*   **White**: 0.6\\n\",\n    \"\\n\",\n    \"However, when we look at the false negative rates split by race, our model again incorrectly predicts the likelihood of a user passing the bar at different rates and, this time, does so by a lot. \\n\",\n    \"\\n\",\n    \"*   **Asian**: 0.01\\n\",\n    \"*   **Black**: 0.05\\n\",\n    \"*   **Hispanic**: 0.02\\n\",\n    \"*   **Other**: 0.01\\n\",\n    \"*   **White**: 0.01\\n\",\n    \"\\n\",\n    \"Most notably the difference between Black and White students is about 380%, meaning that our model is nearly 4x more likely to incorrectly predict that a black student will not pass the bar, than a whilte student. 
If we were to continue with this effort, a practitioner could use these results as a signal that they should spend more time ensuring that their model works well for people from all backgrounds.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"NIdchYPb-_ZV\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# Render Fairness Indicators.\\n\",\n    \"tfma.addons.fairness.view.widget_view.render_fairness_indicator(eval_result)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"NprhBTCbY1sF\"\n   },\n   \"source\": [\n    \"# tfma.EvalResult\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"6f92-e98Y40r\"\n   },\n   \"source\": [\n    \"The [`eval_result`](https://tensorflow.github.io/model-analysis/api_docs/python/tfma/#tensorflow_model_analysis.EvalResult) object, rendered above in `render_fairness_indicator()`, has its own API that can be used to read TFMA results into your programs.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"CDDUxdx-Y8e0\"\n   },\n   \"source\": [\n    \"## [`get_slice_names()`](https://tensorflow.github.io/model-analysis/api_docs/python/tfma/#tensorflow_model_analysis.EvalResult.get_slice_names) and [`get_metric_names()`](https://tensorflow.github.io/model-analysis/api_docs/python/tfma/#tensorflow_model_analysis.EvalResult.get_metric_names)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"oG_mNUNbY98t\"\n   },\n   \"source\": [\n    \"To get the evaluated slices and metrics, you can use the respective functions.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"kbA1sXhCY_G7\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"pp = pprint.PrettyPrinter()\\n\",\n    \"\\n\",\n    \"print(\\\"Slices:\\\")\\n\",\n    \"pp.pprint(eval_result.get_slice_names())\\n\",\n 
   \"print(\\\"\\\\nMetrics:\\\")\\n\",\n    \"pp.pprint(eval_result.get_metric_names())\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"rA1M8aBmZAk6\"\n   },\n   \"source\": [\n    \"## [`get_metrics_for_slice()`](https://tensorflow.github.io/model-analysis/api_docs/python/tfma/#tensorflow_model_analysis.EvalResult.get_metrics_for_slice) and [`get_metrics_for_all_slices()`](https://tensorflow.github.io/model-analysis/api_docs/python/tfma/#tensorflow_model_analysis.EvalResult.get_metrics_for_all_slices)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"a3Ath5MsZCRX\"\n   },\n   \"source\": [\n    \"If you want to get the metrics for a particular slice, you can use `get_metrics_for_slice()`. It returns a dictionary mapping metric names to [metric values](https://github.com/tensorflow/model-analysis/blob/cdb6790dcd7a37c82afb493859b3ef4898963fee/tensorflow_model_analysis/proto/metrics_for_slice.proto#L194).\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"9BWg5HoyZDh-\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"baseline_slice = ()\\n\",\n    \"black_slice = (('race1', 'black'),)\\n\",\n    \"\\n\",\n    \"print(\\\"Baseline metric values:\\\")\\n\",\n    \"pp.pprint(eval_result.get_metrics_for_slice(baseline_slice))\\n\",\n    \"print(\\\"Black metric values:\\\")\\n\",\n    \"pp.pprint(eval_result.get_metrics_for_slice(black_slice))\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"bDcOxvqBZEfg\"\n   },\n   \"source\": [\n    \"If you want to get the metrics for all slices, `get_metrics_for_all_slices()` returns a dictionary mapping each slice to the corresponding `get_metrics_for_slices(slice)`.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"p4NQCi52ZFrw\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    
\"pp.pprint(eval_result.get_metrics_for_all_slices())\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"y-nbqnSTkmW3\"\n   },\n   \"source\": [\n    \"## Conclusion\\n\",\n    \"Within this case study we imported a dataset into a Pandas DataFrame that we then analyzed with Fairness Indicators. Understanding the results of your model and underlying data is an important step in ensuring your model doesn't reflect harmful bias. In the context of this case study we examined the the LSAC dataset and how predictions from this data could be impacted by a students race. The concept of “what is unfair and what is fair have been introduced in multiple disciplines for well over 50 years, including in education, hiring, and machine learning.”<sup>1</sup> Fairness Indicator is a tool to help mitigate fairness concerns in your machine learning model.\\n\",\n    \"\\n\",\n    \"For more information on using Fairness Indicators and resources to learn more about fairness concerns see [here](../../).\\n\",\n    \"\\n\",\n    \"---\\n\",\n    \"\\n\",\n    \"1. Hutchinson, B., Mitchell, M. (2018). 50 Years of Test (Un)fairness: Lessons for Machine Learning. 
https://arxiv.org/abs/1811.10104\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"REV1rBnoBAo1\"\n   },\n   \"source\": [\n    \"## Appendix\\n\",\n    \"\\n\",\n    \"Below are a few functions to help convert ML model outputs to a Pandas DataFrame.\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"F4qv9GXiBsFA\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# TensorFlow Estimator to Pandas DataFrame:\\n\",\n    \"import numpy as np\\n\",\n    \"\\n\",\n    \"# _X_VALUE =  # X value of binary estimator.\\n\",\n    \"# _Y_VALUE =  # Y value of binary estimator.\\n\",\n    \"# _GROUND_TRUTH_LABEL =  # Ground truth value of binary estimator.\\n\",\n    \"\\n\",\n    \"def _get_predicted_probabilities(estimator, input_df, get_input_fn):\\n\",\n    \"  predictions = estimator.predict(\\n\",\n    \"      input_fn=get_input_fn(input_df=input_df, num_epochs=1))\\n\",\n    \"  return [prediction['probabilities'][1] for prediction in predictions]\\n\",\n    \"\\n\",\n    \"def _get_input_fn_law(input_df, num_epochs, batch_size=None):\\n\",\n    \"  return tf.compat.v1.estimator.inputs.pandas_input_fn(\\n\",\n    \"      x=input_df[[_X_VALUE, _Y_VALUE]],\\n\",\n    \"      y=input_df[_GROUND_TRUTH_LABEL],\\n\",\n    \"      num_epochs=num_epochs,\\n\",\n    \"      batch_size=batch_size or len(input_df),\\n\",\n    \"      shuffle=False)\\n\",\n    \"\\n\",\n    \"def estimator_to_dataframe(estimator, input_df, num_keypoints=20):\\n\",\n    \"  x = np.linspace(min(input_df[_X_VALUE]), max(input_df[_X_VALUE]), num_keypoints)\\n\",\n    \"  y = np.linspace(min(input_df[_Y_VALUE]), max(input_df[_Y_VALUE]), num_keypoints)\\n\",\n    \"\\n\",\n    \"  x_grid, y_grid = np.meshgrid(x, y)\\n\",\n    \"\\n\",\n    \"  positions = np.vstack([x_grid.ravel(), y_grid.ravel()])\\n\",\n    \"  plot_df = pd.DataFrame(positions.T, columns=[_X_VALUE, _Y_VALUE])\\n\",\n    \"  plot_df[_GROUND_TRUTH_LABEL] = 
np.ones(len(plot_df))\\n\",\n    \"  predictions = _get_predicted_probabilities(\\n\",\n    \"      estimator=estimator, input_df=plot_df, get_input_fn=_get_input_fn_law)\\n\",\n    \"  return pd.DataFrame(\\n\",\n    \"      data=np.array(np.reshape(predictions, x_grid.shape)).flatten())\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"colab\": {\n   \"collapsed_sections\": [\n    \"Bfrh3DUze0QN\"\n   ],\n   \"name\": \"Pandas DataFrame to Fairness Indicators Case Study\",\n   \"private_outputs\": true,\n   \"provenance\": [],\n   \"toc_visible\": true\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3 (ipykernel)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.22\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 4\n}\n"
  },
  {
    "path": "docs/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"JmvzTcYice-_\"\n   },\n   \"source\": [\n    \"##### Copyright 2020 The TensorFlow Authors.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"zlvAS8a9cD_t\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#@title Licensed under the Apache License, Version 2.0 (the \\\"License\\\");\\n\",\n    \"# you may not use this file except in compliance with the License.\\n\",\n    \"# You may obtain a copy of the License at\\n\",\n    \"#\\n\",\n    \"# https://www.apache.org/licenses/LICENSE-2.0\\n\",\n    \"#\\n\",\n    \"# Unless required by applicable law or agreed to in writing, software\\n\",\n    \"# distributed under the License is distributed on an \\\"AS IS\\\" BASIS,\\n\",\n    \"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\\n\",\n    \"# See the License for the specific language governing permissions and\\n\",\n    \"# limitations under the License.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"b2VYQpTttmVN\"\n   },\n   \"source\": [\n    \"# TensorFlow Constrained Optimization Example Using CelebA Dataset\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"3iFsS2WSeRwe\"\n   },\n   \"source\": [\n    \"<div class=\\\"buttons-wrapper\\\">\\n\",\n    \"  <a class=\\\"md-button\\\" target=\\\"_blank\\\" href=\\n\",\n    \"     \\\"https://www.tensorflow.org/responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img src=\\\"https://www.tensorflow.org/images/tf_logo_32px.png\\\">\\n\",\n    \"      View on TensorFlow.org\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"  <a class=\\\"md-button\\\" target=\\\"_blank\\\" href=\\n\",\n    \"     
\\\"https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img src=\\n\",\n    \"\\t   \\\"https://www.tensorflow.org/images/colab_logo_32px.png\\\">\\n\",\n    \"      Run in Google Colab\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"  <a class=\\\"md-button\\\" target=\\\"_blank\\\" href=\\n\",\n    \"     \\\"https://github.com/tensorflow/fairness-indicators/tree/master/docs/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img width=\\\"32px\\\" src=\\n\",\n    \"\\t   \\\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\\\">\\n\",\n    \"      View source on GitHub\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"  <a class=\\\"md-button\\\" href=\\n\",\n    \"     \\\"https://storage.googleapis.com/tensorflow_docs/fairness-indicators/g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img src=\\n\",\n    \"\\t   \\\"https://www.tensorflow.org/images/download_logo_32px.png\\\">\\n\",\n    \"      Download notebook\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"</div>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"-DQoReGDeN16\"\n   },\n   \"source\": [\n    \"This notebook demonstrates an easy way to create and optimize constrained problems using the TFCO library. This method can be useful in improving models when we find that they’re not performing equally well across different slices of our data, which we can identify using [Fairness Indicators](../../). 
The second of Google’s AI principles states that our technology should avoid creating or reinforcing unfair bias, and we believe this technique can help improve model fairness in some situations. In particular, this notebook will:\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"*   Train a simple, *unconstrained* neural network model to detect a person's smile in images using [`tf.keras`](https://www.tensorflow.org/guide/keras) and the large-scale CelebFaces Attributes ([CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html)) dataset.\\n\",\n    \"*   Evaluate model performance against a commonly used fairness metric across age groups, using Fairness Indicators.\\n\",\n    \"*   Set up a simple constrained optimization problem to achieve fairer performance across age groups.\\n\",\n    \"*   Retrain the now *constrained* model and evaluate performance again, ensuring that our chosen fairness metric has improved.\\n\",\n    \"\\n\",\n    \"Last updated: 3/11 Feb 2020\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"JyCbEWt5Zxe2\"\n   },\n   \"source\": [\n    \"# Installation\\n\",\n    \"This notebook was created in [Colaboratory](https://research.google.com/colaboratory/faq.html), connected to the Python 3 Google Compute Engine backend. If you wish to host this notebook in a different environment, then you should not experience any major issues provided you include all the required packages in the cells below.\\n\",\n    \"\\n\",\n    \"Note that the very first time you run the pip installs, you may be asked to restart the runtime because of preinstalled out of date packages. 
Once you do so, the correct packages will be used.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"T-Zm-KDdt0bn\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#@title Pip installs\\n\",\n    \"!pip install -q -U pip==20.2\\n\",\n    \"\\n\",\n    \"!pip install git+https://github.com/google-research/tensorflow_constrained_optimization\\n\",\n    \"!pip install -q tensorflow-datasets tensorflow\\n\",\n    \"!pip install fairness-indicators \\\\\\n\",\n    \"  \\\"absl-py==0.12.0\\\" \\\\\\n\",\n    \"  \\\"apache-beam<3,>=2.47\\\" \\\\\\n\",\n    \"  \\\"avro-python3==1.9.1\\\" \\\\\\n\",\n    \"  \\\"pyzmq==17.0.0\\\"\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"UXWXhBLvISOY\"\n   },\n   \"source\": [\n    \"Note that depending on when you run the cell below, you may receive a warning about the default version of TensorFlow in Colab switching to TensorFlow 2.X soon. You can safely ignore that warning as this notebook was designed to be compatible with TensorFlow 1.X and 2.X.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"UTBBdSGaZ8aW\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#@title Import Modules\\n\",\n    \"import os\\n\",\n    \"import sys\\n\",\n    \"import tempfile\\n\",\n    \"import urllib\\n\",\n    \"\\n\",\n    \"import tensorflow as tf\\n\",\n    \"from tensorflow import keras\\n\",\n    \"\\n\",\n    \"import tensorflow_datasets as tfds\\n\",\n    \"tfds.disable_progress_bar()\\n\",\n    \"\\n\",\n    \"import numpy as np\\n\",\n    \"\\n\",\n    \"import tensorflow_constrained_optimization as tfco\\n\",\n    \"\\n\",\n    \"from tensorflow_metadata.proto.v0 import schema_pb2\\n\",\n    \"from tfx_bsl.tfxio import tensor_adapter\\n\",\n    \"from tfx_bsl.tfxio import tf_example_record\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": 
{\n    \"id\": \"70tLum8uIZUm\"\n   },\n   \"source\": [\n    \"Additionally, we add a few imports that are specific to Fairness Indicators which we will use to evaluate and visualize the model's performance.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"cellView\": \"form\",\n    \"id\": \"7Se0Z0Bo9K-5\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#@title Fairness Indicators related imports\\n\",\n    \"import tensorflow_model_analysis as tfma\\n\",\n    \"import fairness_indicators as fi\\n\",\n    \"from google.protobuf import text_format\\n\",\n    \"import apache_beam as beam\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"xSG2HP7goGrj\"\n   },\n   \"source\": [\n    \"Although TFCO is compatible with eager and graph execution, this notebook assumes that eager execution is enabled by default as it is in TensorFlow 2.x. To ensure that nothing breaks, eager execution will be enabled in the cell below.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"W0ZusW1-lBao\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#@title Enable Eager Execution and Print Versions\\n\",\n    \"if tf.__version__ < \\\"2.0.0\\\":\\n\",\n    \"  tf.compat.v1.enable_eager_execution()\\n\",\n    \"  print(\\\"Eager execution enabled.\\\")\\n\",\n    \"else:\\n\",\n    \"  print(\\\"Eager execution enabled by default.\\\")\\n\",\n    \"\\n\",\n    \"print(\\\"TensorFlow \\\" + tf.__version__)\\n\",\n    \"print(\\\"TFMA \\\" + tfma.VERSION_STRING)\\n\",\n    \"print(\\\"TFDS \\\" + tfds.version.__version__)\\n\",\n    \"print(\\\"FI \\\" + fi.version.__version__)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"idY3Uuk3yvty\"\n   },\n   \"source\": [\n    \"# CelebA Dataset\\n\",\n    \"[CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) is a large-scale face attributes 
dataset with more than 200,000 celebrity images, each with 40 attribute annotations (such as hair type, fashion accessories, facial features, etc.) and 5 landmark locations (eyes, mouth and nose positions). For more details take a look at [the paper](https://liuziwei7.github.io/projects/FaceAttributes.html).\\n\",\n    \"With the permission of the owners, we have stored this dataset on Google Cloud Storage and mostly access it via [TensorFlow Datasets(`tfds`)](https://www.tensorflow.org/datasets).\\n\",\n    \"\\n\",\n    \"In this notebook:\\n\",\n    \"* Our model will attempt to classify whether the subject of the image is smiling, as represented by the \\\"Smiling\\\" attribute<sup>*</sup>.\\n\",\n    \"*   Images will be resized from 218x178 to 28x28 to reduce the execution time and memory when training.\\n\",\n    \"*   Our model's performance will be evaluated across age groups, using the binary \\\"Young\\\" attribute. We will call this \\\"age group\\\" in this notebook.\\n\",\n    \"\\n\",\n    \"___\\n\",\n    \"\\n\",\n    \"<sup>*</sup> While there is little information available about the labeling methodology for this dataset, we will assume that the \\\"Smiling\\\" attribute was determined by a pleased, kind, or amused expression on the subject's face. 
For the purpose of this case study, we will take these labels as ground truth.\\n\",\n    \"\\n\",\n    \"\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"zCSemFST0b89\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"gcs_base_dir = \\\"gs://celeb_a_dataset/\\\"\\n\",\n    \"celeb_a_builder = tfds.builder(\\\"celeb_a\\\", data_dir=gcs_base_dir, version='2.0.0')\\n\",\n    \"\\n\",\n    \"celeb_a_builder.download_and_prepare()\\n\",\n    \"\\n\",\n    \"num_test_shards_dict = {'0.3.0': 4, '2.0.0': 2} # Used because we download the test dataset separately\\n\",\n    \"version = str(celeb_a_builder.info.version)\\n\",\n    \"print('Celeb_A dataset version: %s' % version)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"cellView\": \"form\",\n    \"id\": \"Ocqv3R06APfW\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#@title Test dataset helper functions\\n\",\n    \"local_root = tempfile.mkdtemp(prefix='test-data')\\n\",\n    \"def local_test_filename_base():\\n\",\n    \"  return local_root\\n\",\n    \"\\n\",\n    \"def local_test_file_full_prefix():\\n\",\n    \"  return os.path.join(local_test_filename_base(), \\\"celeb_a-test.tfrecord\\\")\\n\",\n    \"\\n\",\n    \"def copy_test_files_to_local():\\n\",\n    \"  filename_base = local_test_file_full_prefix()\\n\",\n    \"  num_test_shards = num_test_shards_dict[version]\\n\",\n    \"  for shard in range(num_test_shards):\\n\",\n    \"    url = \\\"https://storage.googleapis.com/celeb_a_dataset/celeb_a/%s/celeb_a-test.tfrecord-0000%s-of-0000%s\\\" % (version, shard, num_test_shards)\\n\",\n    \"    filename = \\\"%s-0000%s-of-0000%s\\\" % (filename_base, shard, num_test_shards)\\n\",\n    \"    res = urllib.request.urlretrieve(url, filename)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"u5PDLXZb_uIj\"\n   },\n   \"source\": [\n    \"## 
Caveats\\n\",\n    \"Before moving forward, there are several considerations to keep in mind in using CelebA:\\n\",\n    \"*   Although in principle this notebook could use any dataset of face images, CelebA was chosen because it contains public domain images of public figures.\\n\",\n    \"*   All of the attribute annotations in CelebA are operationalized as binary categories. For example, the \\\"Young\\\" attribute (as determined by the dataset labelers) is denoted as either present or absent in the image.\\n\",\n    \"*   CelebA's categorizations do not reflect real human diversity of attributes.\\n\",\n    \"*   For the purposes of this notebook, the feature containing the \\\"Young\\\" attribute is referred to as \\\"age group\\\", where the presence of the \\\"Young\\\" attribute in an image is labeled as a member of the \\\"Young\\\" age group and the absence of the \\\"Young\\\" attribute is labeled as a member of the \\\"Not Young\\\" age group. These are assumptions made as this information is not mentioned in the [original paper](http://openaccess.thecvf.com/content_iccv_2015/html/Liu_Deep_Learning_Face_ICCV_2015_paper.html).\\n\",\n    \"*   As such, performance in the models trained in this notebook is tied to the ways the attributes have been operationalized and annotated by the authors of CelebA.\\n\",\n    \"*   This model should not be used for commercial purposes as that would violate [CelebA's non-commercial research agreement](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html).\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"Elkiu92cY2bY\"\n   },\n   \"source\": [\n    \"# Setting Up Input Functions\\n\",\n    \"The subsequent cells will help streamline the input pipeline as well as visualize performance.\\n\",\n    \"\\n\",\n    \"First we define some data-related variables and define a requisite preprocessing function.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   
\"metadata\": {\n    \"id\": \"gDdarTZxk6y4\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#@title Define Variables\\n\",\n    \"ATTR_KEY = \\\"attributes\\\"\\n\",\n    \"IMAGE_KEY = \\\"image\\\"\\n\",\n    \"LABEL_KEY = \\\"Smiling\\\"\\n\",\n    \"GROUP_KEY = \\\"Young\\\"\\n\",\n    \"IMAGE_SIZE = 28\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"cellView\": \"form\",\n    \"id\": \"SD-H70Je0cTp\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#@title Define Preprocessing Functions\\n\",\n    \"def preprocess_input_dict(feat_dict):\\n\",\n    \"  # Separate out the image and target variable from the feature dictionary.\\n\",\n    \"  image = feat_dict[IMAGE_KEY]\\n\",\n    \"  label = feat_dict[ATTR_KEY][LABEL_KEY]\\n\",\n    \"  group = feat_dict[ATTR_KEY][GROUP_KEY]\\n\",\n    \"\\n\",\n    \"  # Resize and normalize image.\\n\",\n    \"  image = tf.cast(image, tf.float32)\\n\",\n    \"  image = tf.image.resize(image, [IMAGE_SIZE, IMAGE_SIZE])\\n\",\n    \"  image /= 255.0\\n\",\n    \"\\n\",\n    \"  # Cast label and group to float32.\\n\",\n    \"  label = tf.cast(label, tf.float32)\\n\",\n    \"  group = tf.cast(group, tf.float32)\\n\",\n    \"\\n\",\n    \"  feat_dict[IMAGE_KEY] = image\\n\",\n    \"  feat_dict[ATTR_KEY][LABEL_KEY] = label\\n\",\n    \"  feat_dict[ATTR_KEY][GROUP_KEY] = group\\n\",\n    \"\\n\",\n    \"  return feat_dict\\n\",\n    \"\\n\",\n    \"get_image_and_label = lambda feat_dict: (feat_dict[IMAGE_KEY], feat_dict[ATTR_KEY][LABEL_KEY])\\n\",\n    \"get_image_label_and_group = lambda feat_dict: (feat_dict[IMAGE_KEY], feat_dict[ATTR_KEY][LABEL_KEY], feat_dict[ATTR_KEY][GROUP_KEY])\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"iwg3sPmExciD\"\n   },\n   \"source\": [\n    \"Then, we build out the data functions we need in the rest of the colab.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n  
 \"metadata\": {\n    \"id\": \"KbR64r0VVG5h\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# Train data returning either 2 or 3 elements (the third element being the group)\\n\",\n    \"def celeb_a_train_data_wo_group(batch_size):\\n\",\n    \"  celeb_a_train_data = celeb_a_builder.as_dataset(split='train').shuffle(1024).repeat().batch(batch_size).map(preprocess_input_dict)\\n\",\n    \"  return celeb_a_train_data.map(get_image_and_label)\\n\",\n    \"def celeb_a_train_data_w_group(batch_size):\\n\",\n    \"  celeb_a_train_data = celeb_a_builder.as_dataset(split='train').shuffle(1024).repeat().batch(batch_size).map(preprocess_input_dict)\\n\",\n    \"  return celeb_a_train_data.map(get_image_label_and_group)\\n\",\n    \"\\n\",\n    \"# Test data for the overall evaluation\\n\",\n    \"celeb_a_test_data = celeb_a_builder.as_dataset(split='test').batch(1).map(preprocess_input_dict).map(get_image_label_and_group)\\n\",\n    \"# Copy test data locally to be able to read it into tfma\\n\",\n    \"copy_test_files_to_local()\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"NXO3woTxiCk0\"\n   },\n   \"source\": [\n    \"# Build a simple DNN Model\\n\",\n    \"Because this notebook focuses on TFCO, we will assemble a simple, unconstrained `tf.keras.Sequential` model.\\n\",\n    \"\\n\",\n    \"We may be able to greatly improve model performance by adding some complexity (e.g., more densely-connected layers, exploring different activation functions, increasing image size), but that may distract from the goal of demonstrating how easy it is to apply the TFCO library when working with Keras. 
For that reason, the model will be kept simple, but feel encouraged to explore this space.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"RNZhN_zU8DRD\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"def create_model():\\n\",\n    \"  # For this notebook, accuracy will be used to evaluate performance.\\n\",\n    \"  METRICS = [\\n\",\n    \"    tf.keras.metrics.BinaryAccuracy(name='accuracy')\\n\",\n    \"  ]\\n\",\n    \"\\n\",\n    \"  # The model consists of:\\n\",\n    \"  # 1. An input layer that represents the flattened 28x28x3 image.\\n\",\n    \"  # 2. A fully connected layer with 64 units activated by a ReLU function.\\n\",\n    \"  # 3. A single-unit readout layer to output real scores instead of probabilities.\\n\",\n    \"  model = keras.Sequential([\\n\",\n    \"      keras.layers.Flatten(input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3), name='image'),\\n\",\n    \"      keras.layers.Dense(64, activation='relu'),\\n\",\n    \"      keras.layers.Dense(1, activation=None)\\n\",\n    \"  ])\\n\",\n    \"\\n\",\n    \"  # TFCO uses hinge loss by default, and that is also what this model will use.\\n\",\n    \"  model.compile(\\n\",\n    \"      optimizer=tf.keras.optimizers.Adam(0.001),\\n\",\n    \"      loss='hinge',\\n\",\n    \"      metrics=METRICS)\\n\",\n    \"  return model\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"7A4uKPNVzPVO\"\n   },\n   \"source\": [\n    \"We also define a function to set seeds to ensure reproducible results. Note that this colab is meant as an educational tool and does not have the stability of a finely tuned production pipeline. Running without setting a seed may lead to varied results. 
\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"-IVw4EgKzqSF\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"def set_seeds():\\n\",\n    \"  np.random.seed(121212)\\n\",\n    \"  tf.compat.v1.set_random_seed(212121)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"Xrbjmmeom8pA\"\n   },\n   \"source\": [\n    \"# Fairness Indicators Helper Functions\\n\",\n    \"Before training our model, we define a number of helper functions that will allow us to evaluate the model's performance via Fairness Indicators.\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"1EPF_k620CRN\"\n   },\n   \"source\": [\n    \"First, we create a helper function to save our model once we train it.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"ejHbhLW5epar\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"def save_model(model, subdir):\\n\",\n    \"  base_dir = tempfile.mkdtemp(prefix='saved_models')\\n\",\n    \"  model_location = os.path.join(base_dir, subdir)\\n\",\n    \"  model.save(model_location, save_format='tf')\\n\",\n    \"  return model_location\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"erhKEvqByCNj\"\n   },\n   \"source\": [\n    \"Next, we define functions used to preprocess the data in order to correctly pass it through to TFMA.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"D2qa8Okwj_U3\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#@title Data Preprocessing Functions\\n\",\n    \"def tfds_filepattern_for_split(dataset_name, split):\\n\",\n    \"  return f\\\"{local_test_file_full_prefix()}*\\\"\\n\",\n    \"\\n\",\n    \"class PreprocessCelebA(object):\\n\",\n    \"  \\\"\\\"\\\"Class that deserializes, decodes and applies additional 
preprocessing for CelebA input.\\\"\\\"\\\"\\n\",\n    \"  def __init__(self, dataset_name):\\n\",\n    \"    builder = tfds.builder(dataset_name)\\n\",\n    \"    self.features = builder.info.features\\n\",\n    \"    example_specs = self.features.get_serialized_info()\\n\",\n    \"    self.parser = tfds.core.example_parser.ExampleParser(example_specs)\\n\",\n    \"\\n\",\n    \"  def __call__(self, serialized_example):\\n\",\n    \"    # Deserialize\\n\",\n    \"    deserialized_example = self.parser.parse_example(serialized_example)\\n\",\n    \"    # Decode\\n\",\n    \"    decoded_example = self.features.decode_example(deserialized_example)\\n\",\n    \"    # Additional preprocessing\\n\",\n    \"    image = decoded_example[IMAGE_KEY]\\n\",\n    \"    label = decoded_example[ATTR_KEY][LABEL_KEY]\\n\",\n    \"    # Resize and scale image.\\n\",\n    \"    image = tf.cast(image, tf.float32)\\n\",\n    \"    image = tf.image.resize(image, [IMAGE_SIZE, IMAGE_SIZE])\\n\",\n    \"    image /= 255.0\\n\",\n    \"    image = tf.reshape(image, [-1])\\n\",\n    \"    # Cast label and group to float32.\\n\",\n    \"    label = tf.cast(label, tf.float32)\\n\",\n    \"\\n\",\n    \"    group = decoded_example[ATTR_KEY][GROUP_KEY]\\n\",\n    \"    \\n\",\n    \"    output = tf.train.Example()\\n\",\n    \"    output.features.feature[IMAGE_KEY].float_list.value.extend(image.numpy().tolist())\\n\",\n    \"    output.features.feature[LABEL_KEY].float_list.value.append(label.numpy())\\n\",\n    \"    output.features.feature[GROUP_KEY].bytes_list.value.append(b\\\"Young\\\" if group.numpy() else b'Not Young')\\n\",\n    \"    return output.SerializeToString()\\n\",\n    \"\\n\",\n    \"def tfds_as_pcollection(beam_pipeline, dataset_name, split):\\n\",\n    \"  return (\\n\",\n    \"      beam_pipeline\\n\",\n    \"   | 'Read records' >> beam.io.ReadFromTFRecord(tfds_filepattern_for_split(dataset_name, split))\\n\",\n    \"   | 'Preprocess' >> 
beam.Map(PreprocessCelebA(dataset_name))\\n\",\n    \"  )\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"fBKvxd2Tz3hK\"\n   },\n   \"source\": [\n    \"Finally, we define a function that evaluates the results in TFMA.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"30YduitftaNB\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"def get_eval_results(model_location, eval_subdir):\\n\",\n    \"  base_dir = tempfile.mkdtemp(prefix='saved_eval_results')\\n\",\n    \"  tfma_eval_result_path = os.path.join(base_dir, eval_subdir)\\n\",\n    \"\\n\",\n    \"  eval_config_pbtxt = \\\"\\\"\\\"\\n\",\n    \"        model_specs {\\n\",\n    \"          label_key: \\\"%s\\\"\\n\",\n    \"        }\\n\",\n    \"        metrics_specs {\\n\",\n    \"          metrics {\\n\",\n    \"            class_name: \\\"FairnessIndicators\\\"\\n\",\n    \"            config: '{ \\\"thresholds\\\": [0.22, 0.5, 0.75] }'\\n\",\n    \"          }\\n\",\n    \"          metrics {\\n\",\n    \"            class_name: \\\"ExampleCount\\\"\\n\",\n    \"          }\\n\",\n    \"        }\\n\",\n    \"        slicing_specs {}\\n\",\n    \"        slicing_specs { feature_keys: \\\"%s\\\" }\\n\",\n    \"        options {\\n\",\n    \"          compute_confidence_intervals { value: False }\\n\",\n    \"          disabled_outputs{values: \\\"analysis\\\"}\\n\",\n    \"        }\\n\",\n    \"      \\\"\\\"\\\" % (LABEL_KEY, GROUP_KEY)\\n\",\n    \"      \\n\",\n    \"  eval_config = text_format.Parse(eval_config_pbtxt, tfma.EvalConfig())\\n\",\n    \"\\n\",\n    \"  eval_shared_model = tfma.default_eval_shared_model(\\n\",\n    \"        eval_saved_model_path=model_location, tags=[tf.saved_model.SERVING])\\n\",\n    \"\\n\",\n    \"  schema_pbtxt = \\\"\\\"\\\"\\n\",\n    \"        tensor_representation_group {\\n\",\n    \"          key: \\\"\\\"\\n\",\n    \"          value {\\n\",\n    \"         
   tensor_representation {\\n\",\n    \"              key: \\\"%s\\\"\\n\",\n    \"              value {\\n\",\n    \"                dense_tensor {\\n\",\n    \"                  column_name: \\\"%s\\\"\\n\",\n    \"                  shape {\\n\",\n    \"                    dim { size: 28 }\\n\",\n    \"                    dim { size: 28 }\\n\",\n    \"                    dim { size: 3 }\\n\",\n    \"                  }\\n\",\n    \"                }\\n\",\n    \"              }\\n\",\n    \"            }\\n\",\n    \"          }\\n\",\n    \"        }\\n\",\n    \"        feature {\\n\",\n    \"          name: \\\"%s\\\"\\n\",\n    \"          type: FLOAT\\n\",\n    \"        }\\n\",\n    \"        feature {\\n\",\n    \"          name: \\\"%s\\\"\\n\",\n    \"          type: FLOAT\\n\",\n    \"        }\\n\",\n    \"        feature {\\n\",\n    \"          name: \\\"%s\\\"\\n\",\n    \"          type: BYTES\\n\",\n    \"        }\\n\",\n    \"        \\\"\\\"\\\" % (IMAGE_KEY, IMAGE_KEY, IMAGE_KEY, LABEL_KEY, GROUP_KEY)\\n\",\n    \"  schema = text_format.Parse(schema_pbtxt, schema_pb2.Schema())\\n\",\n    \"  coder = tf_example_record.TFExampleBeamRecord(\\n\",\n    \"      physical_format='inmem', schema=schema,\\n\",\n    \"      raw_record_column_name=tfma.ARROW_INPUT_COLUMN)\\n\",\n    \"  tensor_adapter_config = tensor_adapter.TensorAdapterConfig(\\n\",\n    \"    arrow_schema=coder.ArrowSchema(),\\n\",\n    \"    tensor_representations=coder.TensorRepresentations())\\n\",\n    \"  # Run the fairness evaluation.\\n\",\n    \"  with beam.Pipeline() as pipeline:\\n\",\n    \"    _ = (\\n\",\n    \"          tfds_as_pcollection(pipeline, 'celeb_a', 'test')\\n\",\n    \"          | 'ExamplesToRecordBatch' >> coder.BeamSource()\\n\",\n    \"          | 'ExtractEvaluateAndWriteResults' >>\\n\",\n    \"          tfma.ExtractEvaluateAndWriteResults(\\n\",\n    \"              eval_config=eval_config,\\n\",\n    \"              
eval_shared_model=eval_shared_model,\\n\",\n    \"              output_path=tfma_eval_result_path,\\n\",\n    \"              tensor_adapter_config=tensor_adapter_config)\\n\",\n    \"    )\\n\",\n    \"  return tfma.load_eval_result(output_path=tfma_eval_result_path)\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"76tZ3vk-tyo9\"\n   },\n   \"source\": [\n    \"# Train & Evaluate Unconstrained Model\\n\",\n    \"\\n\",\n    \"With the model defined and the input pipeline in place, we’re now ready to train our model. To reduce execution time and memory usage, we will train the model on small batches of data for only a few iterations.\\n\",\n    \"\\n\",\n    \"Note that running this notebook in TensorFlow < 2.0.0 may result in a deprecation warning for `np.where`. Safely ignore this warning as TensorFlow addresses this in 2.X by using `tf.where` in place of `np.where`.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"3m9OOdU_8GWo\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"BATCH_SIZE = 32\\n\",\n    \"\\n\",\n    \"# Set seeds to get reproducible results\\n\",\n    \"set_seeds()\\n\",\n    \"\\n\",\n    \"model_unconstrained = create_model()\\n\",\n    \"model_unconstrained.fit(celeb_a_train_data_wo_group(BATCH_SIZE), epochs=5, steps_per_epoch=1000)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"nCtBH9DkvtUy\"\n   },\n   \"source\": [\n    \"Evaluating the model on the test data should result in a final accuracy score of just over 85%. 
Not bad for a simple model with no fine-tuning.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"mgsjbxpTIdZf\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"print('Overall Results, Unconstrained')\\n\",\n    \"celeb_a_test_data = celeb_a_builder.as_dataset(split='test').batch(1).map(preprocess_input_dict).map(get_image_label_and_group)\\n\",\n    \"results = model_unconstrained.evaluate(celeb_a_test_data)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"L5jslIrzwIKo\"\n   },\n   \"source\": [\n    \"However, performance evaluated across age groups may reveal some shortcomings.\\n\",\n    \"\\n\",\n    \"To explore this further, we evaluate the model with Fairness Indicators (via TFMA). In particular, we are interested in seeing whether there is a significant gap in performance between \\\"Young\\\" and \\\"Not Young\\\" categories when evaluated on false positive rate.\\n\",\n    \"\\n\",\n    \"A false positive error occurs when the model incorrectly predicts the positive class. In this context, a false positive outcome occurs when the ground truth is an image of a celebrity 'Not Smiling' and the model predicts 'Smiling'. By extension, the false positive rate is the fraction of truly 'Not Smiling' images that the model incorrectly predicts as 'Smiling'. While this is a relatively mundane error to make in this context, false positive errors can sometimes cause more problematic behaviors. 
For instance, a false positive error in a spam classifier could cause a user to miss an important email.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"nFL91nZF1V8D\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"model_location = save_model(model_unconstrained, 'model_export_unconstrained')\\n\",\n    \"eval_results_unconstrained = get_eval_results(model_location, 'eval_results_unconstrained')\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"34zHIMW0NHld\"\n   },\n   \"source\": [\n    \"As mentioned above, we are concentrating on the false positive rate. The current version of Fairness Indicators (0.1.2) selects false negative rate by default. After running the line below, deselect false_negative_rate and select false_positive_rate to look at the metric we are interested in.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"KXMVmUMi0ydk\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"tfma.addons.fairness.view.widget_view.render_fairness_indicator(eval_results_unconstrained)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"zYVpZ-DpBsfD\"\n   },\n   \"source\": [\n    \"As the results above show, we do see a **disproportionate gap between \\\"Young\\\" and \\\"Not Young\\\" categories**.\\n\",\n    \"\\n\",\n    \"This is where TFCO can help by constraining the false positive rate to meet a more acceptable criterion.\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"ZNnI_Eu70gVp\"\n   },\n   \"source\": [\n    \"# Constrained Model Set Up\\n\",\n    \"As documented in [TFCO's library](https://github.com/google-research/tensorflow_constrained_optimization/blob/master/README.md), there are several helpers that will make it easier to constrain the problem:\\n\",\n    \"\\n\",\n    \"1.   
`tfco.rate_context()` – This is what will be used in constructing a constraint for each age group category.\\n\",\n    \"2.   `tfco.RateMinimizationProblem()` – The rate expression to be minimized here will be the false positive rate subject to age group. In other words, performance will now be evaluated based on the difference between the false positive rates of the age group and that of the overall dataset. For this demonstration, a false positive rate of less than or equal to 5% will be set as the constraint.\\n\",\n    \"3.   `tfco.ProxyLagrangianOptimizerV2()` – This is the helper that will actually solve the rate constraint problem.\\n\",\n    \"\\n\",\n    \"The cell below will call on these helpers to set up model training with the fairness constraint.\\n\",\n    \"\\n\",\n    \"\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"BTukzvfD6iWr\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# The batch size is needed to create the input, labels and group tensors.\\n\",\n    \"# These tensors are initialized with all 0's. They will eventually be assigned\\n\",\n    \"# the contents of each batch. 
A large batch size is chosen so that there are\\n\",\n    \"# enough \\\"Young\\\" and \\\"Not Young\\\" examples in each batch.\\n\",\n    \"set_seeds()\\n\",\n    \"model_constrained = create_model()\\n\",\n    \"BATCH_SIZE = 32\\n\",\n    \"\\n\",\n    \"# Create input tensor.\\n\",\n    \"input_tensor = tf.Variable(\\n\",\n    \"    np.zeros((BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, 3), dtype=\\\"float32\\\"),\\n\",\n    \"    name=\\\"input\\\")\\n\",\n    \"\\n\",\n    \"# Create labels and group tensors (assuming both labels and groups are binary).\\n\",\n    \"labels_tensor = tf.Variable(\\n\",\n    \"    np.zeros(BATCH_SIZE, dtype=\\\"float32\\\"), name=\\\"labels\\\")\\n\",\n    \"groups_tensor = tf.Variable(\\n\",\n    \"    np.zeros(BATCH_SIZE, dtype=\\\"float32\\\"), name=\\\"groups\\\")\\n\",\n    \"\\n\",\n    \"# Create a function that applies the 'model' to the input tensor\\n\",\n    \"# and generates the predictions to be constrained.\\n\",\n    \"def predictions():\\n\",\n    \"  return model_constrained(input_tensor)\\n\",\n    \"\\n\",\n    \"# Create overall context and subsetted context.\\n\",\n    \"# The subsetted context contains the subset of examples where group attribute < 1\\n\",\n    \"# (i.e. 
the subset of \\\"Not Young\\\" celebrity images).\\n\",\n    \"# \\\"groups_tensor < 1\\\" is used instead of \\\"groups_tensor == 0\\\" as the former\\n\",\n    \"# would be a comparison on the tensor value, while the latter would be a\\n\",\n    \"# comparison on the Tensor object.\\n\",\n    \"context = tfco.rate_context(predictions, labels=lambda:labels_tensor)\\n\",\n    \"context_subset = context.subset(lambda:groups_tensor < 1)\\n\",\n    \"\\n\",\n    \"# Set up the list of constraints.\\n\",\n    \"# In this notebook, the constraint will just be: FPR less than or equal to 5%.\\n\",\n    \"constraints = [tfco.false_positive_rate(context_subset) <= 0.05]\\n\",\n    \"\\n\",\n    \"# Set up the rate minimization problem: minimize overall error rate s.t. constraints.\\n\",\n    \"problem = tfco.RateMinimizationProblem(tfco.error_rate(context), constraints)\\n\",\n    \"\\n\",\n    \"# Create the constrained optimizer.\\n\",\n    \"# Separate optimizers are specified for the objective and constraints.\\n\",\n    \"optimizer = tfco.ProxyLagrangianOptimizerV2(\\n\",\n    \"      optimizer=tf.keras.optimizers.legacy.Adam(learning_rate=0.001),\\n\",\n    \"      constraint_optimizer=tf.keras.optimizers.legacy.Adam(learning_rate=0.001),\\n\",\n    \"      num_constraints=problem.num_constraints)\\n\",\n    \"\\n\",\n    \"# A list of all trainable variables is also needed to use TFCO.\\n\",\n    \"var_list = (model_constrained.trainable_weights + list(problem.trainable_variables) +\\n\",\n    \"            optimizer.trainable_variables())\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"thEe8A8UYbrO\"\n   },\n   \"source\": [\n    \"The model is now set up and ready to be trained with the false positive rate constraint across age group.\\n\",\n    \"\\n\",\n    \"Now, because the last iteration of the constrained model may not necessarily be the best performing model in terms of the defined constraint, the TFCO library comes 
equipped with `tfco.find_best_candidate_index()`, which can help choose the best iterate out of the ones found after each epoch. Think of `tfco.find_best_candidate_index()` as an added heuristic that ranks each of the outcomes based on accuracy and fairness constraint (in this case, false positive rate across age group) separately with respect to the training data. That way, it can search for a better trade-off between overall accuracy and the fairness constraint.\\n\",\n    \"\\n\",\n    \"The following cells will start the training with constraints while also finding the best performing model per iteration.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"73doG4HL6nPS\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# Obtain train set batches.\\n\",\n    \"\\n\",\n    \"NUM_ITERATIONS = 100  # Number of training iterations.\\n\",\n    \"SKIP_ITERATIONS = 10  # Print training stats once in this many iterations.\\n\",\n    \"\\n\",\n    \"# Create temp directory for saving snapshots of models.\\n\",\n    \"temp_directory = tempfile.mkdtemp()\\n\",\n    \"\\n\",\n    \"# List of objective and constraints across iterations.\\n\",\n    \"objective_list = []\\n\",\n    \"violations_list = []\\n\",\n    \"\\n\",\n    \"# Training iterations.\\n\",\n    \"iteration_count = 0\\n\",\n    \"for (image, label, group) in celeb_a_train_data_w_group(BATCH_SIZE):\\n\",\n    \"  # Assign current batch to input, labels and groups tensors.\\n\",\n    \"  input_tensor.assign(image)\\n\",\n    \"  labels_tensor.assign(label)\\n\",\n    \"  groups_tensor.assign(group)\\n\",\n    \"\\n\",\n    \"  # Run gradient update.\\n\",\n    \"  optimizer.minimize(problem, var_list=var_list)\\n\",\n    \"\\n\",\n    \"  # Record objective and violations.\\n\",\n    \"  objective = problem.objective()\\n\",\n    \"  violations = problem.constraints()\\n\",\n    \"\\n\",\n    \"  
sys.stdout.write(\\n\",\n    \"      \\\"\\\\r Iteration %d: Hinge Loss = %.3f, Max. Constraint Violation = %.3f\\\"\\n\",\n    \"      % (iteration_count + 1, objective, max(violations)))\\n\",\n    \"\\n\",\n    \"  # Snapshot model once every SKIP_ITERATIONS iterations.\\n\",\n    \"  if iteration_count % SKIP_ITERATIONS == 0:\\n\",\n    \"    objective_list.append(objective)\\n\",\n    \"    violations_list.append(violations)\\n\",\n    \"\\n\",\n    \"    # Save snapshot of model weights.\\n\",\n    \"    model_constrained.save_weights(\\n\",\n    \"        temp_directory + \\\"/celeb_a_constrained_\\\" +\\n\",\n    \"        str(iteration_count // SKIP_ITERATIONS) + \\\".h5\\\")\\n\",\n    \"\\n\",\n    \"  iteration_count += 1\\n\",\n    \"  if iteration_count >= NUM_ITERATIONS:\\n\",\n    \"    break\\n\",\n    \"\\n\",\n    \"# Choose best model from recorded iterates and load that model.\\n\",\n    \"best_index = tfco.find_best_candidate_index(\\n\",\n    \"    np.array(objective_list), np.array(violations_list))\\n\",\n    \"\\n\",\n    \"model_constrained.load_weights(\\n\",\n    \"    temp_directory + \\\"/celeb_a_constrained_\\\" + str(best_index) + \\\".h5\\\")\\n\",\n    \"\\n\",\n    \"# Remove temp directory.\\n\",\n    \"os.system(\\\"rm -r \\\" + temp_directory)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"6r-6_R_gSrsT\"\n   },\n   \"source\": [\n    \"Having applied the constraint, we evaluate the results once again using Fairness Indicators.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"5G6B3OR9CUmo\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"model_location = save_model(model_constrained, 'model_export_constrained')\\n\",\n    \"eval_result_constrained = get_eval_results(model_location, 'eval_results_constrained')\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"sVteOnE80ATS\"\n   
},\n   \"source\": [\n    \"As before, deselect false_negative_rate and select false_positive_rate to look at the metric we are interested in.\\n\",\n    \"\\n\",\n    \"Note that to fairly compare the two versions of our model, it is important to use thresholds that set the overall false positive rate to be roughly equal. This ensures that we are looking at actual change in the model, as opposed to just the equivalent of moving the threshold boundary. In our case, comparing the unconstrained model at 0.5 and the constrained model at 0.22 provides a fair comparison for the models.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"GRIjYftvuc7b\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"eval_results_dict = {\\n\",\n    \"    'constrained': eval_result_constrained,\\n\",\n    \"    'unconstrained': eval_results_unconstrained,\\n\",\n    \"}\\n\",\n    \"tfma.addons.fairness.view.widget_view.render_fairness_indicator(multi_eval_results=eval_results_dict)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"lrT-7EBrcBvV\"\n   },\n   \"source\": [\n    \"With TFCO's ability to express a more complex requirement as a rate constraint, we helped this model achieve a more desirable outcome with little impact on overall performance. 
There is, of course, still room for improvement, but at least TFCO was able to find a model that gets close to satisfying the constraint and reduces the disparity between the groups as much as possible.\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"colab\": {\n   \"collapsed_sections\": [],\n   \"name\": \"Fairness Indicators TFCO CelebA Case Study.ipynb\",\n   \"private_outputs\": true,\n   \"provenance\": [],\n   \"toc_visible\": true\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3 (ipykernel)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.22\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 4\n}\n"
  },
  {
    "path": "docs/tutorials/Fairness_Indicators_TFCO_Wiki_Case_Study.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"jMqk3Z8EciF8\"\n   },\n   \"source\": [\n    \"##### Copyright 2020 The TensorFlow Authors.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"XbpNOB-vJVKu\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#@title Licensed under the Apache License, Version 2.0 (the \\\"License\\\");\\n\",\n    \"# you may not use this file except in compliance with the License.\\n\",\n    \"# You may obtain a copy of the License at\\n\",\n    \"#\\n\",\n    \"# https://www.apache.org/licenses/LICENSE-2.0\\n\",\n    \"#\\n\",\n    \"# Unless required by applicable law or agreed to in writing, software\\n\",\n    \"# distributed under the License is distributed on an \\\"AS IS\\\" BASIS,\\n\",\n    \"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\\n\",\n    \"# See the License for the specific language governing permissions and\\n\",\n    \"# limitations under the License.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"bqdaOVRxWs8v\"\n   },\n   \"source\": [\n    \"# Wiki Talk Comments Toxicity Prediction\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"EG_KEDkodWsT\"\n   },\n   \"source\": [\n    \"<div class=\\\"buttons-wrapper\\\">\\n\",\n    \"  <a class=\\\"md-button\\\" target=\\\"_blank\\\" href=\\n\",\n    \"     \\\"tutorials/Fairness_Indicators_TFCO_Wiki_Case_Study\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img src=\\\"https://www.tensorflow.org/images/tf_logo_32px.png\\\">\\n\",\n    \"      View on TensorFlow.org\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"  <a class=\\\"md-button\\\" target=\\\"_blank\\\" href=\\n\",\n    \"     
\\\"https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_TFCO_Wiki_Case_Study.ipynb\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img src=\\n\",\n    \"\\t   \\\"https://www.tensorflow.org/images/colab_logo_32px.png\\\">\\n\",\n    \"      Run in Google Colab\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"  <a class=\\\"md-button\\\" target=\\\"_blank\\\" href=\\n\",\n    \"     \\\"https://github.com/tensorflow/fairness-indicators/tree/master/docs/tutorials/Fairness_Indicators_TFCO_Wiki_Case_Study.ipynb\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img width=\\\"32px\\\" src=\\n\",\n    \"\\t   \\\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\\\">\\n\",\n    \"      View source on GitHub\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"  <a class=\\\"md-button\\\" href=\\n\",\n    \"     \\\"https://storage.googleapis.com/tensorflow_docs/fairness-indicators/g3doc/tutorials/Fairness_Indicators_TFCO_Wiki_Case_Study.ipynb\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img src=\\n\",\n    \"\\t   \\\"https://www.tensorflow.org/images/download_logo_32px.png\\\">\\n\",\n    \"      Download notebook\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"</div>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"y6T5tlXcdW7J\"\n   },\n   \"source\": [\n    \"In this example, we consider the task of predicting whether a discussion comment posted on a Wiki talk page contains toxic content (i.e. contains content that is “rude, disrespectful or unreasonable”). 
We use a public <a href=\\\"https://figshare.com/articles/Wikipedia_Talk_Labels_Toxicity/4563973\\\">dataset</a> released by the <a href=\\\"https://conversationai.github.io/\\\">Conversation AI</a> project, which contains over 100k comments from the English Wikipedia that are annotated by crowd workers (see [paper](https://arxiv.org/pdf/1610.08914.pdf) for labeling methodology).\\n\",\n    \"\\n\",\n    \"One of the challenges with this dataset is that a very small proportion of the comments cover sensitive topics such as sexuality or religion. As such, training a neural network model on this dataset leads to disparate performance on the smaller sensitive topics. This can mean that innocuous statements about those topics might get incorrectly flagged as ‘toxic’ at higher rates, causing speech to be unfairly censored.\\n\",\n    \"\\n\",\n    \"By imposing constraints during training, we can train a *fairer* model that performs more equitably across the different topic groups.\\n\",\n    \"\\n\",\n    \"We will use the TFCO library to optimize for our fairness goal during training.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"DG_C2gsAKV7x\"\n   },\n   \"source\": [\n    \"## Installation\\n\",\n    \"\\n\",\n    \"Let's first install and import the relevant libraries. Note that you may have to restart your Colab once after running the first cell because of outdated packages in the runtime. 
After doing so, there should be no further issues with imports.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"0XOLn8Pyrc_s\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#@title pip installs\\n\",\n    \"!pip install git+https://github.com/google-research/tensorflow_constrained_optimization\\n\",\n    \"!pip install git+https://github.com/tensorflow/fairness-indicators\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"2ZkQDo2xcDXU\"\n   },\n   \"source\": [\n    \"Note that depending on when you run the cell below, you may receive a warning about the default version of TensorFlow in Colab switching to TensorFlow 2.X soon. You can safely ignore that warning as this notebook was designed to be compatible with TensorFlow 1.X and 2.X.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"cellView\": \"form\",\n    \"id\": \"nd_Y6CTnWs8w\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#@title Import Modules\\n\",\n    \"import io\\n\",\n    \"import os\\n\",\n    \"import shutil\\n\",\n    \"import sys\\n\",\n    \"import tempfile\\n\",\n    \"import time\\n\",\n    \"import urllib\\n\",\n    \"import zipfile\\n\",\n    \"\\n\",\n    \"import apache_beam as beam\\n\",\n    \"from IPython.display import display\\n\",\n    \"from IPython.display import HTML\\n\",\n    \"import numpy as np\\n\",\n    \"import pandas as pd\\n\",\n    \"\\n\",\n    \"import tensorflow as tf\\n\",\n    \"import tensorflow.keras as keras\\n\",\n    \"from tensorflow.keras import layers\\n\",\n    \"from tensorflow.keras.preprocessing import sequence\\n\",\n    \"from tensorflow.keras.preprocessing import text\\n\",\n    \"import tensorflow_constrained_optimization as tfco\\n\",\n    \"import tensorflow_model_analysis as tfma\\n\",\n    \"import fairness_indicators as fi\\n\",\n    \"from 
tensorflow_model_analysis.addons.fairness.view import widget_view\\n\",\n    \"from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_evaluate_graph\\n\",\n    \"from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_extractor\\n\",\n    \"from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_predict as agnostic_predict\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"GvqR564dLEVa\"\n   },\n   \"source\": [\n    \"Though TFCO is compatible with eager and graph execution, this notebook assumes that eager execution is enabled by default. To ensure that nothing breaks, eager execution will be enabled in the cell below.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"cellView\": \"form\",\n    \"id\": \"avMBqzjWct4Z\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#@title Enable Eager Execution and Print Versions\\n\",\n    \"if tf.__version__ < \\\"2.0.0\\\":\\n\",\n    \"  tf.enable_eager_execution()\\n\",\n    \"  print(\\\"Eager execution enabled.\\\")\\n\",\n    \"else:\\n\",\n    \"  print(\\\"Eager execution enabled by default.\\\")\\n\",\n    \"\\n\",\n    \"print(\\\"TensorFlow \\\" + tf.__version__)\\n\",\n    \"print(\\\"TFMA \\\" + tfma.__version__)\\n\",\n    \"print(\\\"FI \\\" + fi.version.__version__)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"YUJyWaAwWs83\"\n   },\n   \"source\": [\n    \"## Hyper-parameters\\n\",\n    \"\\n\",\n    \"First, we set some hyper-parameters needed for the data preprocessing and model training.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"1aXlwlqTWs84\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"hparams = {\\n\",\n    \"    \\\"batch_size\\\": 128,\\n\",\n    \"    \\\"cnn_filter_sizes\\\": [128, 128, 128],\\n\",\n    \"    \\\"cnn_kernel_sizes\\\": 
[5, 5, 5],\\n\",\n    \"    \\\"cnn_pooling_sizes\\\": [5, 5, 40],\\n\",\n    \"    \\\"constraint_learning_rate\\\": 0.01,\\n\",\n    \"    \\\"embedding_dim\\\": 100,\\n\",\n    \"    \\\"embedding_trainable\\\": False,\\n\",\n    \"    \\\"learning_rate\\\": 0.005,\\n\",\n    \"    \\\"max_num_words\\\": 10000,\\n\",\n    \"    \\\"max_sequence_length\\\": 250\\n\",\n    \"}\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"0PMs8Iwxq98C\"\n   },\n   \"source\": [\n    \"## Load and pre-process dataset\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"DIe2JRDeWs87\"\n   },\n   \"source\": [\n    \"Next, we download the dataset and preprocess it. The train, test and validation sets are provided as separate CSV files.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"rcd2CV7pWs88\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"toxicity_data_url = (\\\"https://github.com/conversationai/unintended-ml-bias-analysis/\\\"\\n\",\n    \"                     \\\"raw/e02b9f12b63a39235e57ba6d3d62d8139ca5572c/data/\\\")\\n\",\n    \"\\n\",\n    \"data_train = pd.read_csv(toxicity_data_url + \\\"wiki_train.csv\\\")\\n\",\n    \"data_test = pd.read_csv(toxicity_data_url + \\\"wiki_test.csv\\\")\\n\",\n    \"data_vali = pd.read_csv(toxicity_data_url + \\\"wiki_dev.csv\\\")\\n\",\n    \"\\n\",\n    \"data_train.head()\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"Ojo617RIWs8_\"\n   },\n   \"source\": [\n    \"The `comment` column contains the discussion comments and `is_toxic` column indicates whether or not a comment is annotated as toxic. \\n\",\n    \"\\n\",\n    \"In the following, we:\\n\",\n    \"1. Separate out the labels\\n\",\n    \"2. Tokenize the text comments\\n\",\n    \"3. 
Identify comments that contain sensitive topic terms \\n\",\n    \"\\n\",\n    \"First, we separate the labels from the train, test and validation sets. The labels are all binary (0 or 1).\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"mxo7ny90Ws9A\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"labels_train = data_train[\\\"is_toxic\\\"].values.reshape(-1, 1) * 1.0\\n\",\n    \"labels_test = data_test[\\\"is_toxic\\\"].values.reshape(-1, 1) * 1.0\\n\",\n    \"labels_vali = data_vali[\\\"is_toxic\\\"].values.reshape(-1, 1) * 1.0\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"alrWi6jUWs9C\"\n   },\n   \"source\": [\n    \"Next, we tokenize the textual comments using the `Tokenizer` provided by `Keras`. We use the training set comments alone to build a vocabulary of tokens, and use them to convert all the comments into a (padded) sequence of tokens of the same length.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"yvOTBsrHWs9D\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"tokenizer = text.Tokenizer(num_words=hparams[\\\"max_num_words\\\"])\\n\",\n    \"tokenizer.fit_on_texts(data_train[\\\"comment\\\"])\\n\",\n    \"\\n\",\n    \"def prep_text(texts, tokenizer, max_sequence_length):\\n\",\n    \"    # Turns text into into padded sequences.\\n\",\n    \"    text_sequences = tokenizer.texts_to_sequences(texts)\\n\",\n    \"    return sequence.pad_sequences(text_sequences, maxlen=max_sequence_length)\\n\",\n    \"\\n\",\n    \"text_train = prep_text(data_train[\\\"comment\\\"], tokenizer, hparams[\\\"max_sequence_length\\\"])\\n\",\n    \"text_test = prep_text(data_test[\\\"comment\\\"], tokenizer, hparams[\\\"max_sequence_length\\\"])\\n\",\n    \"text_vali = prep_text(data_vali[\\\"comment\\\"], tokenizer, hparams[\\\"max_sequence_length\\\"])\"\n   ]\n  },\n  {\n   \"cell_type\": 
\"markdown\",\n   \"metadata\": {\n    \"id\": \"Cn5zbgp-Ws9F\"\n   },\n   \"source\": [\n    \"Finally, we identify comments related to certain sensitive topic groups. We consider a subset of the <a href=\\\"https://github.com/conversationai/unintended-ml-bias-analysis/blob/master/unintended_ml_bias/bias_madlibs_data/adjectives_people.txt\\\">identity terms</a> provided with the dataset and group them into\\n\",\n    \"four broad topic groups: *sexuality*, *gender identity*, *religion*, and *race*.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"EnFfV2gEWs9G\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"terms = {\\n\",\n    \"    'sexuality': ['gay', 'lesbian', 'bisexual', 'homosexual', 'straight', 'heterosexual'], \\n\",\n    \"    'gender identity': ['trans', 'transgender', 'cis', 'nonbinary'],\\n\",\n    \"    'religion': ['christian', 'muslim', 'jewish', 'buddhist', 'catholic', 'protestant', 'sikh', 'taoist'],\\n\",\n    \"    'race': ['african', 'african american', 'black', 'white', 'european', 'hispanic', 'latino', 'latina', \\n\",\n    \"             'latinx', 'mexican', 'canadian', 'american', 'asian', 'indian', 'middle eastern', 'chinese', \\n\",\n    \"             'japanese']}\\n\",\n    \"\\n\",\n    \"group_names = list(terms.keys())\\n\",\n    \"num_groups = len(group_names)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"ooI3F5M4Ws9I\"\n   },\n   \"source\": [\n    \"We then create separate group membership matrices for the train, test and validation sets, where the rows correspond to comments, the columns correspond to the four sensitive groups, and each entry is a boolean indicating whether the comment contains a term from the topic group.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"zO7PyNckWs9J\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"def 
get_groups(text):\\n\",\n    \"    # Returns a boolean NumPy array of shape (n, k), where n is the number of comments, \\n\",\n    \"    # and k is the number of groups. Each entry (i, j) indicates if the i-th comment \\n\",\n    \"    # contains a term from the j-th group.\\n\",\n    \"    groups = np.zeros((text.shape[0], num_groups))\\n\",\n    \"    for ii in range(num_groups):\\n\",\n    \"        groups[:, ii] = text.str.contains('|'.join(terms[group_names[ii]]), case=False)\\n\",\n    \"    return groups\\n\",\n    \"\\n\",\n    \"groups_train = get_groups(data_train[\\\"comment\\\"])\\n\",\n    \"groups_test = get_groups(data_test[\\\"comment\\\"])\\n\",\n    \"groups_vali = get_groups(data_vali[\\\"comment\\\"])\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"GFAI6AB9Ws9L\"\n   },\n   \"source\": [\n    \"As shown below, all four topic groups constitute only a small fraction of the overall dataset, and have varying proportions of toxic comments.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"8Ug4u_P9Ws9M\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"print(\\\"Overall label proportion = %.1f%%\\\" % (labels_train.mean() * 100))\\n\",\n    \"\\n\",\n    \"group_stats = []\\n\",\n    \"for ii in range(num_groups):\\n\",\n    \"    group_proportion = groups_train[:, ii].mean()\\n\",\n    \"    group_pos_proportion = labels_train[groups_train[:, ii] == 1].mean()\\n\",\n    \"    group_stats.append([group_names[ii],\\n\",\n    \"                        \\\"%.2f%%\\\" % (group_proportion * 100), \\n\",\n    \"                        \\\"%.1f%%\\\" % (group_pos_proportion * 100)])\\n\",\n    \"group_stats = pd.DataFrame(group_stats, \\n\",\n    \"                           columns=[\\\"Topic group\\\", \\\"Group proportion\\\", \\\"Label proportion\\\"])\\n\",\n    \"group_stats\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n 
   \"id\": \"aG5ZKKrVWs9O\"\n   },\n   \"source\": [\n    \"We see that only 1.3% of the dataset contains comments related to sexuality. Among them, 37% of the comments have been annotated as being toxic. Note that this is significantly larger than the overall proportion of comments annotated as toxic. This could be because the few comments that used those identity terms did so in pejorative contexts. As mentioned above, this could cause our model to disporportionately misclassify comments as toxic when they include those terms. Since this is the concern, we'll make sure to look at the **False Positive Rate** when we evaluate the model's performance.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"5DkJpKaLWs9P\"\n   },\n   \"source\": [\n    \"## Build CNN toxicity prediction model\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"niJ4KIJgWs9Q\"\n   },\n   \"source\": [\n    \"Having prepared the dataset, we now build a `Keras` model for prediction toxicity. The model we use is a convolutional neural network (CNN) with the same architecture used by the Conversation AI project for their debiasing analysis. We adapt <a href=\\\"https://github.com/conversationai/unintended-ml-bias-analysis/blob/master/unintended_ml_bias/model_tool.py\\\">code</a> provided by them to construct the model layers.\\n\",\n    \"\\n\",\n    \"The model uses an embedding layer to convert the text tokens to fixed-length vectors. This layer converts the input text sequence into a sequence of vectors, and passes them through several layers of convolution and pooling operations, followed by a final fully-connected layer.\\n\",\n    \"\\n\",\n    \"We make use of pre-trained GloVe word vector embeddings, which we download below. 
This may take a few minutes to complete.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"yevbBL2oWs9Q\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"zip_file_url = \\\"http://nlp.stanford.edu/data/glove.6B.zip\\\"\\n\",\n    \"zip_file = urllib.request.urlopen(zip_file_url)\\n\",\n    \"archive = zipfile.ZipFile(io.BytesIO(zip_file.read()))\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"a11-YWDnWs9S\"\n   },\n   \"source\": [\n    \"We use the downloaded GloVe embeddings to create an embedding matrix, where the rows contain the word embeddings for the tokens in the `Tokenizer`'s vocabulary. \"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"bBS74MMYWs9T\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"embeddings_index = {}\\n\",\n    \"glove_file = \\\"glove.6B.100d.txt\\\"\\n\",\n    \"\\n\",\n    \"with archive.open(glove_file) as f:\\n\",\n    \"    for line in f:\\n\",\n    \"        values = line.split()\\n\",\n    \"        word = values[0].decode(\\\"utf-8\\\") \\n\",\n    \"        coefs = np.asarray(values[1:], dtype=\\\"float32\\\")\\n\",\n    \"        embeddings_index[word] = coefs\\n\",\n    \"\\n\",\n    \"embedding_matrix = np.zeros((len(tokenizer.word_index) + 1, hparams[\\\"embedding_dim\\\"]))\\n\",\n    \"num_words_in_embedding = 0\\n\",\n    \"for word, i in tokenizer.word_index.items():\\n\",\n    \"    embedding_vector = embeddings_index.get(word)\\n\",\n    \"    if embedding_vector is not None:\\n\",\n    \"        num_words_in_embedding += 1\\n\",\n    \"        embedding_matrix[i] = embedding_vector\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"t9NVp-_eWs9V\"\n   },\n   \"source\": [\n    \"We are now ready to specify the `Keras` layers. 
We write a function to create a new model, which we will invoke whenever we wish to train a new model.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"_f_DhA6OWs9W\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"def create_model():\\n\",\n    \"    model = keras.Sequential()\\n\",\n    \"\\n\",\n    \"    # Embedding layer.\\n\",\n    \"    embedding_layer = layers.Embedding(\\n\",\n    \"        embedding_matrix.shape[0],\\n\",\n    \"        embedding_matrix.shape[1],\\n\",\n    \"        weights=[embedding_matrix],\\n\",\n    \"        input_length=hparams[\\\"max_sequence_length\\\"],\\n\",\n    \"        trainable=hparams['embedding_trainable'])\\n\",\n    \"    model.add(embedding_layer)\\n\",\n    \"\\n\",\n    \"    # Convolution layers.\\n\",\n    \"    for filter_size, kernel_size, pool_size in zip(\\n\",\n    \"        hparams['cnn_filter_sizes'], hparams['cnn_kernel_sizes'],\\n\",\n    \"        hparams['cnn_pooling_sizes']):\\n\",\n    \"\\n\",\n    \"        conv_layer = layers.Conv1D(\\n\",\n    \"            filter_size, kernel_size, activation='relu', padding='same')\\n\",\n    \"        model.add(conv_layer)\\n\",\n    \"\\n\",\n    \"        pooled_layer = layers.MaxPooling1D(pool_size, padding='same')\\n\",\n    \"        model.add(pooled_layer)\\n\",\n    \"\\n\",\n    \"    # Add a flatten layer, a fully-connected layer and an output layer.\\n\",\n    \"    model.add(layers.Flatten())\\n\",\n    \"    model.add(layers.Dense(128, activation='relu'))\\n\",\n    \"    model.add(layers.Dense(1))\\n\",\n    \"    \\n\",\n    \"    return model\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"CwcqYITBN7bW\"\n   },\n   \"source\": [\n    \"We also define a method to set random seeds. 
This is done to ensure reproducible results.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"C_1nsXntN98C\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"def set_seeds():\\n\",\n    \"  np.random.seed(121212)\\n\",\n    \"  tf.compat.v1.set_random_seed(212121)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"X-_fKjDtWs9Y\"\n   },\n   \"source\": [\n    \"## Fairness indicators\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"k009haGaWs9Z\"\n   },\n   \"source\": [\n    \"We also write functions to plot fairness indicators.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"B9ZgGCAs8V-I\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"def create_examples(labels, predictions, groups, group_names):\\n\",\n    \"  # Returns tf.examples with given labels, predictions, and group information.  
\\n\",\n    \"  examples = []\\n\",\n    \"  sigmoid = lambda x: 1/(1 + np.exp(-x)) \\n\",\n    \"  for ii in range(labels.shape[0]):\\n\",\n    \"    example = tf.train.Example()\\n\",\n    \"    example.features.feature['toxicity'].float_list.value.append(\\n\",\n    \"        labels[ii][0])\\n\",\n    \"    example.features.feature['prediction'].float_list.value.append(\\n\",\n    \"        sigmoid(predictions[ii][0]))  # predictions need to be in [0, 1].\\n\",\n    \"    for jj in range(groups.shape[1]):\\n\",\n    \"      example.features.feature[group_names[jj]].bytes_list.value.append(\\n\",\n    \"          b'Yes' if groups[ii, jj] else b'No')\\n\",\n    \"    examples.append(example)\\n\",\n    \"  return examples\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"vESL-3dU9iiG\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"def evaluate_results(labels, predictions, groups, group_names):\\n\",\n    \"  # Evaluates fairness indicators for given labels, predictions and group\\n\",\n    \"  # membership info.\\n\",\n    \"  examples = create_examples(labels, predictions, groups, group_names)\\n\",\n    \"\\n\",\n    \"  # Create feature map for labels, predictions and each group.\\n\",\n    \"  feature_map = {\\n\",\n    \"      'prediction': tf.io.FixedLenFeature([], tf.float32),\\n\",\n    \"      'toxicity': tf.io.FixedLenFeature([], tf.float32),\\n\",\n    \"  }\\n\",\n    \"  for group in group_names:\\n\",\n    \"    feature_map[group] = tf.io.FixedLenFeature([], tf.string)\\n\",\n    \"\\n\",\n    \"  # Serialize the examples.\\n\",\n    \"  serialized_examples = [e.SerializeToString() for e in examples]\\n\",\n    \"\\n\",\n    \"  BASE_DIR = tempfile.gettempdir()\\n\",\n    \"  OUTPUT_DIR = os.path.join(BASE_DIR, 'output')\\n\",\n    \"\\n\",\n    \"  with beam.Pipeline() as pipeline:\\n\",\n    \"    model_agnostic_config = agnostic_predict.ModelAgnosticConfig(\\n\",\n    \"        
      label_keys=['toxicity'],\\n\",\n    \"              prediction_keys=['prediction'],\\n\",\n    \"              feature_spec=feature_map)\\n\",\n    \"    \\n\",\n    \"    slices = [tfma.slicer.SingleSliceSpec()]\\n\",\n    \"    for group in group_names:\\n\",\n    \"      slices.append(\\n\",\n    \"          tfma.slicer.SingleSliceSpec(columns=[group]))\\n\",\n    \"\\n\",\n    \"    extractors = [\\n\",\n    \"            model_agnostic_extractor.ModelAgnosticExtractor(\\n\",\n    \"                model_agnostic_config=model_agnostic_config),\\n\",\n    \"            tfma.extractors.slice_key_extractor.SliceKeyExtractor(slices)\\n\",\n    \"        ]\\n\",\n    \"\\n\",\n    \"    metrics_callbacks = [\\n\",\n    \"      tfma.post_export_metrics.fairness_indicators(\\n\",\n    \"          thresholds=[0.5],\\n\",\n    \"          target_prediction_keys=['prediction'],\\n\",\n    \"          labels_key='toxicity'),\\n\",\n    \"      tfma.post_export_metrics.example_count()]\\n\",\n    \"\\n\",\n    \"    # Create a model agnostic aggregator.\\n\",\n    \"    eval_shared_model = tfma.types.EvalSharedModel(\\n\",\n    \"        add_metrics_callbacks=metrics_callbacks,\\n\",\n    \"        construct_fn=model_agnostic_evaluate_graph.make_construct_fn(\\n\",\n    \"            add_metrics_callbacks=metrics_callbacks,\\n\",\n    \"            config=model_agnostic_config))\\n\",\n    \"\\n\",\n    \"    # Run Model Agnostic Eval.\\n\",\n    \"    _ = (\\n\",\n    \"        pipeline\\n\",\n    \"        | beam.Create(serialized_examples)\\n\",\n    \"        | 'ExtractEvaluateAndWriteResults' >>\\n\",\n    \"          tfma.ExtractEvaluateAndWriteResults(\\n\",\n    \"              eval_shared_model=eval_shared_model,\\n\",\n    \"              output_path=OUTPUT_DIR,\\n\",\n    \"              extractors=extractors,\\n\",\n    \"              compute_confidence_intervals=True\\n\",\n    \"          )\\n\",\n    \"    )\\n\",\n    \"\\n\",\n    \"  
fairness_ind_result = tfma.load_eval_result(output_path=OUTPUT_DIR)\\n\",\n    \"\\n\",\n    \"  # Also evaluate accuracy of the model.\\n\",\n    \"  accuracy = np.mean(labels == (predictions > 0.0))\\n\",\n    \"\\n\",\n    \"  return fairness_ind_result, accuracy\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"W3Sp7mpsWs9f\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"def plot_fairness_indicators(eval_result, title):\\n\",\n    \"  fairness_ind_result, accuracy = eval_result\\n\",\n    \"  display(HTML(\\\"<center><h2>\\\" + title + \\n\",\n    \"               \\\" (Accuracy = %.2f%%)\\\" % (accuracy * 100) + \\\"</h2></center>\\\"))\\n\",\n    \"  widget_view.render_fairness_indicator(fairness_ind_result)\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"WqLdtgI42fxb\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"def plot_multi_fairness_indicators(multi_eval_results):\\n\",\n    \" \\n\",\n    \"  multi_results = {}\\n\",\n    \"  multi_accuracy = {}\\n\",\n    \"  for title, (fairness_ind_result, accuracy) in multi_eval_results.items():\\n\",\n    \"    multi_results[title] = fairness_ind_result\\n\",\n    \"    multi_accuracy[title] = accuracy\\n\",\n    \"  \\n\",\n    \"  title_str = \\\"<center><h2>\\\"\\n\",\n    \"  for title in multi_eval_results.keys():\\n\",\n    \"      title_str+=title + \\\" (Accuracy = %.2f%%)\\\" % (multi_accuracy[title] * 100) + \\\"; \\\"\\n\",\n    \"  title_str=title_str[:-2]\\n\",\n    \"  title_str+=\\\"</h2></center>\\\"\\n\",\n    \"  # fairness_ind_result, accuracy = eval_result\\n\",\n    \"  display(HTML(title_str))\\n\",\n    \"  widget_view.render_fairness_indicator(multi_eval_results=multi_results)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"8aWNc4CdWs9h\"\n   },\n   \"source\": [\n    \"## Train unconstrained model\"\n   ]\n  
},\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"DuSA8qL7Ws9i\"\n   },\n   \"source\": [\n    \"For the first model we train, we optimize a simple cross-entropy loss *without* any constraints..\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"0g50bauHWs9j\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# Set random seed for reproducible results.\\n\",\n    \"set_seeds()\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"YsCoHMG_iIzc\"\n   },\n   \"source\": [\n    \"**Note**: The following code cell can take ~8 minutes to run.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"tamJiG3FiDYW\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# Optimizer and loss.\\n\",\n    \"optimizer = tf.keras.optimizers.Adam(learning_rate=hparams[\\\"learning_rate\\\"])\\n\",\n    \"loss = lambda y_true, y_pred: tf.keras.losses.binary_crossentropy(\\n\",\n    \"    y_true, y_pred, from_logits=True)\\n\",\n    \"\\n\",\n    \"# Create, compile and fit model.\\n\",\n    \"model_unconstrained = create_model()\\n\",\n    \"model_unconstrained.compile(optimizer=optimizer, loss=loss)\\n\",\n    \"\\n\",\n    \"model_unconstrained.fit(\\n\",\n    \"    x=text_train, y=labels_train, batch_size=hparams[\\\"batch_size\\\"], epochs=2)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"p7AvIdktWs9t\"\n   },\n   \"source\": [\n    \"Having trained the unconstrained model, we plot various evaluation metrics for the model on the test set.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"tHV40_21lRL6\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"scores_unconstrained_test = model_unconstrained.predict(text_test)\\n\",\n    \"eval_result_unconstrained = evaluate_results(\\n\",\n    \"    
labels_test, scores_unconstrained_test, groups_test, group_names)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"AJpRuN0EOeyG\"\n   },\n   \"source\": [\n    \"As explained above, we are concentrating on the false positive rate. In their current version (0.1.2), Fairness Indicators select the false negative rate by default. After running the line below, go ahead and deselect false_negative_rate and select false_positive_rate to look at the metric we are interested in.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"2fwNpfou4yvP\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"plot_fairness_indicators(eval_result_unconstrained, \\\"Unconstrained\\\")\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"J3TbAenkGM7P\"\n   },\n   \"source\": [\n    \"While the overall false positive rate is less than 2%, the false positive rate on the sexuality-related comments is significantly higher. This is because the sexuality group is very small in size, and has a disproportionately higher fraction of comments annotated as toxic. Hence, training a model without constraints results in the model believing that sexuality-related terms are a strong indicator of toxicity.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"KmxyAo9hWs9w\"\n   },\n   \"source\": [\n    \"## Train with constraints on false positive rates\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"l3dYUchIWs9w\"\n   },\n   \"source\": [\n    \"To avoid large differences in false positive rates across different groups, we \\n\",\n    \"next train a model by constraining the false positive rates for each group to be within a desired limit. 
In this case, we will optimize the error rate of the model subject to the *per-group false positive rates being less than or equal to 2%*.\\n\",\n    \"\\n\",\n    \"Training on minibatches with per-group constraints can be challenging for this dataset, however, as the groups we wish to constrain are all small in size, and it's likely that the individual minibatches contain very few examples from each group. Hence the gradients we compute during training will be noisy, and result in the model converging very slowly. \\n\",\n    \"\\n\",\n    \"To mitigate this problem, we recommend using two streams of minibatches, with the first stream formed as before from the entire training set, and the second stream formed solely from the sensitive group examples. We will compute the objective using minibatches from the first stream and the per-group constraints using minibatches from the second stream. Because the batches from the second stream are likely to contain a larger number of examples from each group, we expect our updates to be less noisy.\\n\",\n    \"\\n\",\n    \"We create separate features, labels and groups tensors to hold the minibatches from the two streams.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"vMuuTOEOWs9x\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# Set random seed.\\n\",\n    \"set_seeds()\\n\",\n    \"\\n\",\n    \"# Features tensors.\\n\",\n    \"batch_shape = (hparams[\\\"batch_size\\\"], hparams['max_sequence_length'])\\n\",\n    \"features_tensor = tf.Variable(np.zeros(batch_shape, dtype='int32'), name='x')\\n\",\n    \"features_tensor_sen = tf.Variable(np.zeros(batch_shape, dtype='int32'), name='x_sen')\\n\",\n    \"\\n\",\n    \"# Labels tensors.\\n\",\n    \"batch_shape = (hparams[\\\"batch_size\\\"], 1)\\n\",\n    \"labels_tensor = tf.Variable(np.zeros(batch_shape, dtype='float32'), name='labels')\\n\",\n    \"labels_tensor_sen = 
tf.Variable(np.zeros(batch_shape, dtype='float32'), name='labels_sen')\\n\",\n    \"\\n\",\n    \"# Groups tensors.\\n\",\n    \"batch_shape = (hparams[\\\"batch_size\\\"], num_groups)\\n\",\n    \"groups_tensor_sen = tf.Variable(np.zeros(batch_shape, dtype='float32'), name='groups_sen')\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"-wh26V7nWs9z\"\n   },\n   \"source\": [\n    \"We instantiate a new model, and compute predictions for minibatches from the two streams.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"kawyrkQIWs9z\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# Create model, and separate prediction functions for the two streams. \\n\",\n    \"# For the predictions, we use a nullary function returning a Tensor to support eager mode.\\n\",\n    \"model_constrained = create_model()\\n\",\n    \"\\n\",\n    \"def predictions():\\n\",\n    \"  return model_constrained(features_tensor)\\n\",\n    \"\\n\",\n    \"def predictions_sen():\\n\",\n    \"  return model_constrained(features_tensor_sen)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"UG9t7dw1Ws91\"\n   },\n   \"source\": [\n    \"We then set up a constrained optimization problem with the error rate as the objective and with constraints on the per-group false positive rate.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"EhKAMGSJWs93\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"epsilon = 0.02  # Desired false-positive rate threshold.\\n\",\n    \"\\n\",\n    \"# Set up separate contexts for the two minibatch streams.\\n\",\n    \"context = tfco.rate_context(predictions, lambda:labels_tensor)\\n\",\n    \"context_sen = tfco.rate_context(predictions_sen, lambda:labels_tensor_sen)\\n\",\n    \"\\n\",\n    \"# Compute the objective using the first stream.\\n\",\n    \"objective 
= tfco.error_rate(context)\\n\",\n    \"\\n\",\n    \"# Compute the constraint using the second stream.\\n\",\n    \"# Subset the examples belonging to the \\\"sexuality\\\" group from the second stream \\n\",\n    \"# and add a constraint on the group's false positive rate.\\n\",\n    \"context_sen_subset = context_sen.subset(lambda: groups_tensor_sen[:, 0] > 0)\\n\",\n    \"constraint = [tfco.false_positive_rate(context_sen_subset) <= epsilon]\\n\",\n    \"\\n\",\n    \"# Create a rate minimization problem.\\n\",\n    \"problem = tfco.RateMinimizationProblem(objective, constraint)\\n\",\n    \"\\n\",\n    \"# Set up a constrained optimizer.\\n\",\n    \"optimizer = tfco.ProxyLagrangianOptimizerV2(\\n\",\n    \"    optimizer=tf.keras.optimizers.Adam(learning_rate=hparams[\\\"learning_rate\\\"]),\\n\",\n    \"    num_constraints=problem.num_constraints)\\n\",\n    \"\\n\",\n    \"# List of variables to optimize include the model weights, \\n\",\n    \"# and the trainable variables from the rate minimization problem and \\n\",\n    \"# the constrained optimizer.\\n\",\n    \"var_list = (model_constrained.trainable_weights + list(problem.trainable_variables) +\\n\",\n    \"            optimizer.trainable_variables())\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"CoFWd8wMWs94\"\n   },\n   \"source\": [\n    \"We are ready to train the model. We maintain a separate counter for the two minibatch streams. 
Every time we perform a gradient update, we will have to copy the minibatch contents from the first stream to the tensors `features_tensor` and `labels_tensor`, and the minibatch contents from the second stream to the tensors `features_tensor_sen`, `labels_tensor_sen` and `groups_tensor_sen`.\\n\",\n    \"\\n\",\n    \"**Note**: The following code cell may take ~12 minutes to run.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"zbXohC6vWs95\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# Indices of sensitive group members.\\n\",\n    \"protected_group_indices = np.nonzero(groups_train.sum(axis=1))[0]\\n\",\n    \"\\n\",\n    \"num_examples = text_train.shape[0]\\n\",\n    \"num_examples_sen = protected_group_indices.shape[0]\\n\",\n    \"batch_size = hparams[\\\"batch_size\\\"]\\n\",\n    \"\\n\",\n    \"# Number of steps needed for one epoch over the training sample.\\n\",\n    \"num_steps = int(num_examples / batch_size)\\n\",\n    \"\\n\",\n    \"start_time = time.time()\\n\",\n    \"\\n\",\n    \"# Loop over minibatches.\\n\",\n    \"for batch_index in range(num_steps):\\n\",\n    \"    # Indices for current minibatch in the first stream.\\n\",\n    \"    batch_indices = np.arange(\\n\",\n    \"        batch_index * batch_size, (batch_index + 1) * batch_size)\\n\",\n    \"    batch_indices = [ind % num_examples for ind in batch_indices]\\n\",\n    \"\\n\",\n    \"    # Indices for current minibatch in the second stream.\\n\",\n    \"    batch_indices_sen = np.arange(\\n\",\n    \"        batch_index * batch_size, (batch_index + 1) * batch_size)\\n\",\n    \"    batch_indices_sen = [protected_group_indices[ind % num_examples_sen]\\n\",\n    \"                         for ind in batch_indices_sen]\\n\",\n    \"\\n\",\n    \"    # Assign features, labels, groups from the minibatches to the respective tensors.\\n\",\n    \"    features_tensor.assign(text_train[batch_indices, :])\\n\",\n    
\"    labels_tensor.assign(labels_train[batch_indices])\\n\",\n    \"\\n\",\n    \"    features_tensor_sen.assign(text_train[batch_indices_sen, :])\\n\",\n    \"    labels_tensor_sen.assign(labels_train[batch_indices_sen])\\n\",\n    \"    groups_tensor_sen.assign(groups_train[batch_indices_sen, :])\\n\",\n    \"\\n\",\n    \"    # Gradient update.\\n\",\n    \"    optimizer.minimize(problem, var_list=var_list)\\n\",\n    \"    \\n\",\n    \"    # Record and print batch training stats every 10 steps.\\n\",\n    \"    if (batch_index + 1) % 10 == 0 or batch_index in (0, num_steps - 1):\\n\",\n    \"      hinge_loss = problem.objective()\\n\",\n    \"      max_violation = max(problem.constraints())\\n\",\n    \"\\n\",\n    \"      elapsed_time = time.time() - start_time\\n\",\n    \"      sys.stdout.write(\\n\",\n    \"          \\\"\\\\rStep %d / %d: Elapsed time = %ds, Loss = %.3f, Violation = %.3f\\\" % \\n\",\n    \"          (batch_index + 1, num_steps, elapsed_time, hinge_loss, max_violation))\\n\",\n    \"    \"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"DdJfplDpWs97\"\n   },\n   \"source\": [\n    \"Having trained the constrained model, we plot various evaluation metrics for the model on the test set.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"jEerPEwLhfTN\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"scores_constrained_test = model_constrained.predict(text_test)\\n\",\n    \"eval_result_constrained = evaluate_results(\\n\",\n    \"    labels_test, scores_constrained_test, groups_test, group_names)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"ustp5z7xQnHI\"\n   },\n   \"source\": [\n    \"As with last time, remember to select false_positive_rate.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"ztK7iM4LjKmT\"\n   },\n   \"outputs\": 
[],\n   \"source\": [\n    \"plot_fairness_indicators(eval_result_constrained, \\\"Constrained\\\")\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"6P6dxSg5_mTu\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"multi_results = {\\n\",\n    \"    'constrained':eval_result_constrained,\\n\",\n    \"    'unconstrained':eval_result_unconstrained,\\n\",\n    \"}\\n\",\n    \"plot_multi_fairness_indicators(multi_eval_results=multi_results)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"EfKo5O3QWs9-\"\n   },\n   \"source\": [\n    \"As we can see from the Fairness Indicators, compared to the unconstrained model the constrained model yields significantly lower false positive rates for the sexuality-related comments, and does so with only a slight dip in the overall accuracy.\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"colab\": {\n   \"collapsed_sections\": [],\n   \"name\": \"Fairness Indicators TFCO Wiki Comments Case Study.ipynb\",\n   \"private_outputs\": true,\n   \"provenance\": [],\n   \"toc_visible\": true\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3 (ipykernel)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.22\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 4\n}\n"
  },
  {
    "path": "docs/tutorials/Fairness_Indicators_TensorBoard_Plugin_Example_Colab.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"_E4uORykIpG4\"\n   },\n   \"source\": [\n    \"##### Copyright 2020 The TensorFlow Authors.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"cellView\": \"form\",\n    \"id\": \"aBT221yVIujn\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#@title Licensed under the Apache License, Version 2.0 (the \\\"License\\\");\\n\",\n    \"# you may not use this file except in compliance with the License.\\n\",\n    \"# You may obtain a copy of the License at\\n\",\n    \"#\\n\",\n    \"# https://www.apache.org/licenses/LICENSE-2.0\\n\",\n    \"#\\n\",\n    \"# Unless required by applicable law or agreed to in writing, software\\n\",\n    \"# distributed under the License is distributed on an \\\"AS IS\\\" BASIS,\\n\",\n    \"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\\n\",\n    \"# See the License for the specific language governing permissions and\\n\",\n    \"# limitations under the License.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"aalPefrUUplk\"\n   },\n   \"source\": [\n    \"# Fairness Indicators TensorBoard Plugin Example Colab\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"fFTJpyFlI-uI\"\n   },\n   \"source\": [\n    \"<div class=\\\"buttons-wrapper\\\">\\n\",\n    \"  <a class=\\\"md-button\\\" target=\\\"_blank\\\" href=\\n\",\n    \"     \\\"https://www.tensorflow.org/responsible_ai/fairness_indicators/tutorials/Fairness_indicators_TensorBoard_Plugin_Example_Colab\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img src=\\\"https://www.tensorflow.org/images/tf_logo_32px.png\\\">\\n\",\n    \"      View on TensorFlow.org\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"  <a class=\\\"md-button\\\" target=\\\"_blank\\\" href=\\n\",\n    \"     
\\\"https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_TensorBoard_Plugin_Example_Colab.ipynb\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img src=\\n\",\n    \"\\t   \\\"https://www.tensorflow.org/images/colab_logo_32px.png\\\">\\n\",\n    \"      Run in Google Colab\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"  <a class=\\\"md-button\\\" target=\\\"_blank\\\" href=\\n\",\n    \"     \\\"https://github.com/tensorflow/fairness-indicators/blob/master/docs/tutorials/Fairness_Indicators_TensorBoard_Plugin_Example_Colab.ipynb\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img width=\\\"32px\\\" src=\\n\",\n    \"\\t   \\\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\\\">\\n\",\n    \"      View source on GitHub\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"  <a class=\\\"md-button\\\" href=\\n\",\n    \"     \\\"https://storage.googleapis.com/tensorflow_docs/fairness-indicators/g3doc/tutorials/Fairness_Indicators_TensorBoard_Plugin_Example_Colab.ipynb\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img src=\\n\",\n    \"\\t   \\\"https://www.tensorflow.org/images/download_logo_32px.png\\\">\\n\",\n    \"      Download notebook\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"</div>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"UZ48WFLwbCL6\"\n   },\n   \"source\": [\n    \"## Overview\\n\",\n    \"\\n\",\n    \"In this activity, you'll use [Fairness Indicators for TensorBoard](https://github.com/tensorflow/tensorboard/tree/master/docs/fairness-indicators.md). 
With the plugin, you can visualize fairness evaluations for your runs and easily compare performance across groups.\\n\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"u33JXdluZ2lG\"\n   },\n   \"source\": [\n    \"# Importing\\n\",\n    \"\\n\",\n    \"Run the following code to install the required libraries.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"EoRNffG599XP\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"!pip install -q -U pip==20.2\\n\",\n    \"\\n\",\n    \"!pip install fairness_indicators 'absl-py<0.9,>=0.7'\\n\",\n    \"!pip install google-api-python-client==1.8.3\\n\",\n    \"!pip install tensorboard-plugin-fairness-indicators\\n\",\n    \"!pip install tensorflow-serving-api==2.17.1\\n\",\n    \"!pip install tensorflow-model-analysis\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"mglfaM4_mtIk\"\n   },\n   \"source\": [\n    \"**Restart the runtime.** After the runtime is restarted, continue with the following cells without running the previous cells again.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"sFZJ8f_M7mlc\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# %tf.disable_v2_behavior()\\t# Uncomment this line if running in Google Colab.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"B8dlyTyiTe-9\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import datetime\\n\",\n    \"import os\\n\",\n    \"import tempfile\\n\",\n    \"from tensorboard_plugin_fairness_indicators import summary_v2\\n\",\n    \"import tensorflow.compat.v1 as tf\\n\",\n    \"import numpy as np\\n\",\n    \"from tensorflow import keras\\n\",\n    \"from google.protobuf import text_format\\n\",\n    \"\\n\",\n    \"# example_model.py is provided in the fairness_indicators package to train 
and\\n\",\n    \"# evaluate an example model.\\n\",\n    \"from fairness_indicators import example_model\\n\",\n    \"import tensorflow_model_analysis as tfma\\n\",\n    \"\\n\",\n    \"tf.compat.v1.enable_eager_execution()\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"TsplOJGqWCf5\"\n   },\n   \"source\": [\n    \"# Data and Constants\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"NdLBi6tN5i7I\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"# To know about dataset, check Fairness Indicators Example Colab at:\\n\",\n    \"# https://github.com/tensorflow/fairness-indicators/blob/master/docs/tutorials/Fairness_Indicators_Example_Colab.ipynb\\n\",\n    \"\\n\",\n    \"train_tf_file = tf.keras.utils.get_file('train.tf', 'https://storage.googleapis.com/civil_comments_dataset/train_tf_processed.tfrecord')\\n\",\n    \"validate_tf_file = tf.keras.utils.get_file('validate.tf', 'https://storage.googleapis.com/civil_comments_dataset/validate_tf_processed.tfrecord')\\n\",\n    \"\\n\",\n    \"BASE_DIR = tempfile.gettempdir()\\n\",\n    \"TEXT_FEATURE = 'comment_text'\\n\",\n    \"LABEL = 'toxicity'\\n\",\n    \"FEATURE_MAP = {\\n\",\n    \"    # Label:\\n\",\n    \"    LABEL: tf.io.FixedLenFeature([], tf.float32),\\n\",\n    \"    # Text:\\n\",\n    \"    TEXT_FEATURE: tf.io.FixedLenFeature([], tf.string),\\n\",\n    \"\\n\",\n    \"    # Identities:\\n\",\n    \"    'sexual_orientation': tf.io.VarLenFeature(tf.string),\\n\",\n    \"    'gender': tf.io.VarLenFeature(tf.string),\\n\",\n    \"    'religion': tf.io.VarLenFeature(tf.string),\\n\",\n    \"    'race': tf.io.VarLenFeature(tf.string),\\n\",\n    \"    'disability': tf.io.VarLenFeature(tf.string),\\n\",\n    \"}\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"mfbgerCsEOmN\"\n   },\n   \"source\": [\n    \"# Train the Model\"\n   ]\n  },\n  {\n   \"cell_type\": 
\"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"YwoC-dzEDid3\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"model_dir = os.path.join(BASE_DIR, 'train',\\n\",\n    \"                         datetime.datetime.now().strftime('%Y%m%d-%H%M%S'))\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"VqjEYySbYaX5\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"classifier = example_model.get_example_model(example_model.TEXT_FEATURE)\\n\",\n    \"classifier.compile(optimizer=keras.optimizers.Adam(), loss='mse')\\n\",\n    \"\\n\",\n    \"# Read the data from the training file\\n\",\n    \"data = []\\n\",\n    \"dataset = tf.data.Dataset.list_files(train_tf_file, shuffle=False)\\n\",\n    \"dataset = dataset.flat_map(tf.data.TFRecordDataset)\\n\",\n    \"for raw_record in dataset.take(1):\\n\",\n    \"  example = tf.train.Example()\\n\",\n    \"  example.ParseFromString(raw_record.numpy())\\n\",\n    \"  data.append(example)\\n\",\n    \"\\n\",\n    \"classifier.fit(\\n\",\n    \"    tf.constant([e.SerializeToString() for e in data]),\\n\",\n    \"    np.array([\\n\",\n    \"        e.features.feature[example_model.LABEL].float_list.value[:][0]\\n\",\n    \"        for e in data\\n\",\n    \"    ]),\\n\",\n    \")\\n\",\n    \"classifier.save(model_dir, save_format='tf')\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"jTPqije9Eg5b\"\n   },\n   \"source\": [\n    \"# Run TensorFlow Model Analysis with Fairness Indicators\\n\",\n    \"This step might take 2 to 5 minutes.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"QLjiy5VCzlRw\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"tfma_eval_result_path = os.path.join(BASE_DIR, 'tfma_eval_result')\\n\",\n    \"\\n\",\n    \"eval_config = text_format.Parse(\\n\",\n    \"    \\\"\\\"\\\"\\n\",\n    \"    model_specs 
{\\n\",\n    \"      signature_name: \\\"serving_default\\\"\\n\",\n    \"      prediction_key: \\\"predictions\\\" # placeholder\\n\",\n    \"      label_key: \\\"toxicity\\\" # placeholder\\n\",\n    \"    }\\n\",\n    \"    slicing_specs {}\\n\",\n    \"    slicing_specs {\\n\",\n    \"      feature_keys: [\\\"gender\\\"]\\n\",\n    \"    }\\n\",\n    \"    metrics_specs {\\n\",\n    \"      metrics {\\n\",\n    \"        class_name: \\\"ExampleCount\\\"\\n\",\n    \"      }\\n\",\n    \"      metrics {\\n\",\n    \"        class_name: \\\"FairnessIndicators\\\"\\n\",\n    \"      }\\n\",\n    \"    }\\n\",\n    \"\\\"\\\"\\\",\\n\",\n    \"    tfma.EvalConfig(),\\n\",\n    \")\\n\",\n    \"\\n\",\n    \"tfma_eval_result_path = os.path.join(model_dir, 'tfma_eval_result')\\n\",\n    \"example_model.evaluate_model(\\n\",\n    \"    model_dir,\\n\",\n    \"    validate_tf_file,\\n\",\n    \"    tfma_eval_result_path,\\n\",\n    \"    eval_config,\\n\",\n    \")\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"U1ROnulYc8Ub\"\n   },\n   \"source\": [\n    \"# Visualize Fairness Indicators in TensorBoard\\n\",\n    \"\\n\",\n    \"\\n\",\n    \"Below you will visualize Fairness Indicators in TensorBoard and compare the performance of each slice of the data on selected metrics. You can adjust the baseline comparison slice as well as the displayed threshold(s) using the drop-down menus at the top of the visualization. 
You can also select different evaluation runs using the drop-down menu at the top-left corner.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"zCV-Jo0xda6g\"\n   },\n   \"source\": [\n    \"## Write Fairness Indicators Summary\\n\",\n    \"Write a summary file containing all the information required to visualize Fairness Indicators in TensorBoard.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"JNaNhTCTAMHm\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import tensorflow.compat.v2 as tf2\\n\",\n    \"\\n\",\n    \"writer = tf2.summary.create_file_writer(\\n\",\n    \"    os.path.join(model_dir, 'fairness_indicators'))\\n\",\n    \"with writer.as_default():\\n\",\n    \"  summary_v2.FairnessIndicators(tfma_eval_result_path, step=1)\\n\",\n    \"writer.close()\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"MB2Gfm9BdXVY\"\n   },\n   \"source\": [\n    \"## Launch TensorBoard\\n\",\n    \"Navigate to the \\\"Fairness Indicators\\\" tab to visualize Fairness Indicators.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"UiHhDWu8tyEI\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"%load_ext tensorboard\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"ix6d718udWsK\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"%tensorboard --logdir=$model_dir\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"accelerator\": \"GPU\",\n  \"colab\": {\n   \"name\": \"Fairness_Indicators_TensorBoard_Plugin_Example_Colab.ipynb\",\n   \"toc_visible\": true\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3 (ipykernel)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": 
\".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.22\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 4\n}\n"
  },
  {
    "path": "docs/tutorials/Fairness_Indicators_on_TF_Hub_Text_Embeddings.ipynb",
    "content": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"Tce3stUlHN0L\"\n   },\n   \"source\": [\n    \"##### Copyright 2020 The TensorFlow Authors.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"cellView\": \"form\",\n    \"id\": \"tuOe1ymfHZPu\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"#@title Licensed under the Apache License, Version 2.0 (the \\\"License\\\");\\n\",\n    \"# you may not use this file except in compliance with the License.\\n\",\n    \"# You may obtain a copy of the License at\\n\",\n    \"#\\n\",\n    \"# https://www.apache.org/licenses/LICENSE-2.0\\n\",\n    \"#\\n\",\n    \"# Unless required by applicable law or agreed to in writing, software\\n\",\n    \"# distributed under the License is distributed on an \\\"AS IS\\\" BASIS,\\n\",\n    \"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\\n\",\n    \"# See the License for the specific language governing permissions and\\n\",\n    \"# limitations under the License.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"aalPefrUUplk\"\n   },\n   \"source\": [\n    \"# Fairness Indicators on TF-Hub Text Embeddings\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"MfBg1C5NB3X0\"\n   },\n   \"source\": [\n    \"<div class=\\\"buttons-wrapper\\\">\\n\",\n    \"  <a class=\\\"md-button\\\" target=\\\"_blank\\\" href=\\n\",\n    \"     \\\"https://www.tensorflow.org/responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_on_TF_Hub_Text_Embeddings\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img src=\\\"https://www.tensorflow.org/images/tf_logo_32px.png\\\">\\n\",\n    \"      View on TensorFlow.org\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"  <a class=\\\"md-button\\\" target=\\\"_blank\\\" href=\\n\",\n    \"     
\\\"https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_on_TF_Hub_Text_Embeddings.ipynb\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img src=\\n\",\n    \"\\t   \\\"https://www.tensorflow.org/images/colab_logo_32px.png\\\">\\n\",\n    \"      Run in Google Colab\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"  <a class=\\\"md-button\\\" target=\\\"_blank\\\" href=\\n\",\n    \"     \\\"https://github.com/tensorflow/fairness-indicators/blob/master/docs/tutorials/Fairness_Indicators_on_TF_Hub_Text_Embeddings.ipynb\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img width=\\\"32px\\\" src=\\n\",\n    \"\\t   \\\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\\\">\\n\",\n    \"      View source on GitHub\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"  <a class=\\\"md-button\\\" href=\\n\",\n    \"     \\\"https://storage.googleapis.com/tensorflow_docs/fairness-indicators/g3doc/tutorials/Fairness_Indicators_on_TF_Hub_Text_Embeddings.ipynb\\\">\\n\",\n    \"    <div class=\\\"buttons-content\\\">\\n\",\n    \"      <img src=\\n\",\n    \"\\t   \\\"https://www.tensorflow.org/images/download_logo_32px.png\\\">\\n\",\n    \"      Download notebook\\n\",\n    \"    </div>\\n\",\n    \"  </a>\\n\",\n    \"</div>\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"w0zsksbydmNp\"\n   },\n   \"source\": [\n    \"In this tutorial, you will learn how to use [Fairness Indicators](https://github.com/tensorflow/fairness-indicators) to evaluate embeddings from [TF Hub](https://www.tensorflow.org/hub). 
This notebook uses the [Civil Comments dataset](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification).\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"u33JXdluZ2lG\"\n   },\n   \"source\": [\n    \"## Setup\\n\",\n    \"\\n\",\n    \"Install the required libraries.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"BAUEkqYlzP3W\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"!pip install -q -U pip==20.2\\n\",\n    \"\\n\",\n    \"!pip install fairness-indicators \\\\\\n\",\n    \"  \\\"absl-py==0.12.0\\\" \\\\\\n\",\n    \"  \\\"pyarrow==10.0.1\\\" \\\\\\n\",\n    \"  \\\"apache-beam==2.50.0\\\" \\\\\\n\",\n    \"  \\\"avro-python3==1.9.1\\\"\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"e6pe8c6L7kCW\"\n   },\n   \"source\": [\n    \"Import other required libraries.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"B8dlyTyiTe-9\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"import os\\n\",\n    \"import tempfile\\n\",\n    \"import apache_beam as beam\\n\",\n    \"from datetime import datetime\\n\",\n    \"import tensorflow as tf\\n\",\n    \"import tensorflow_hub as hub\\n\",\n    \"import tensorflow_model_analysis as tfma\\n\",\n    \"from tensorflow_model_analysis.addons.fairness.view import widget_view\\n\",\n    \"from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators\\n\",\n    \"from fairness_indicators import example_model\\n\",\n    \"from fairness_indicators.tutorial_utils import util\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"Xz4PcI0hSVcq\"\n   },\n   \"source\": [\n    \"### Dataset\\n\",\n    \"\\n\",\n    \"In this notebook, you work with the [Civil Comments 
dataset](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification) which contains approximately 2 million public comments made public by the [Civil Comments platform](https://github.com/reaktivstudios/civil-comments) in 2017 for ongoing research. This effort was sponsored by Jigsaw, who have hosted competitions on Kaggle to help classify toxic comments as well as minimize unintended model bias.\\n\",\n    \"\\n\",\n    \"Each individual text comment in the dataset has a toxicity label, with the label being 1 if the comment is toxic and 0 if the comment is non-toxic. Within the data, a subset of comments are labeled with a variety of identity attributes, including categories for gender, sexual orientation, religion, and race or ethnicity.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"9ekzb7vVnPCc\"\n   },\n   \"source\": [\n    \"### Prepare the data\\n\",\n    \"\\n\",\n    \"TensorFlow parses features from data using [`tf.io.FixedLenFeature`](https://www.tensorflow.org/api_docs/python/tf/io/FixedLenFeature) and [`tf.io.VarLenFeature`](https://www.tensorflow.org/api_docs/python/tf/io/VarLenFeature). 
Map out the input feature, output feature, and all other slicing features of interest.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"n4_nXQDykX6W\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"BASE_DIR = tempfile.gettempdir()\\n\",\n    \"\\n\",\n    \"# The input and output features of the classifier\\n\",\n    \"TEXT_FEATURE = 'comment_text'\\n\",\n    \"LABEL = 'toxicity'\\n\",\n    \"\\n\",\n    \"FEATURE_MAP = {\\n\",\n    \"    # input and output features\\n\",\n    \"    LABEL: tf.io.FixedLenFeature([], tf.float32),\\n\",\n    \"    TEXT_FEATURE: tf.io.FixedLenFeature([], tf.string),\\n\",\n    \"\\n\",\n    \"    # slicing features\\n\",\n    \"    'sexual_orientation': tf.io.VarLenFeature(tf.string),\\n\",\n    \"    'gender': tf.io.VarLenFeature(tf.string),\\n\",\n    \"    'religion': tf.io.VarLenFeature(tf.string),\\n\",\n    \"    'race': tf.io.VarLenFeature(tf.string),\\n\",\n    \"    'disability': tf.io.VarLenFeature(tf.string)\\n\",\n    \"}\\n\",\n    \"\\n\",\n    \"IDENTITY_TERMS = ['gender', 'sexual_orientation', 'race', 'religion', 'disability']\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"CeUtnaT49Doq\"\n   },\n   \"source\": [\n    \"By default, the notebook downloads a preprocessed version of this dataset, but\\n\",\n    \"you may use the original dataset and re-run the processing steps if\\n\",\n    \"desired.\\n\",\n    \"\\n\",\n    \"In the original dataset, each comment is labeled with the percentage\\n\",\n    \"of raters who believed that a comment corresponds to a particular\\n\",\n    \"identity. For example, a comment might be labeled with the following:\\n\",\n    \"`{ male: 0.3, female: 1.0, transgender: 0.0, heterosexual: 0.8,\\n\",\n    \"homosexual_gay_or_lesbian: 1.0 }`.\\n\",\n    \"\\n\",\n    \"The processing step groups identity by category (gender,\\n\",\n    \"sexual_orientation, etc.) 
and removes identities with a score less\\n\",\n    \"than 0.5. So the example above would be converted to the following:\\n\",\n    \"`{ gender: [female], sexual_orientation: [heterosexual,\\n\",\n    \"homosexual_gay_or_lesbian] }`\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"FHxa31VX9eP2\"\n   },\n   \"source\": [\n    \"Download the dataset.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"NUmSmqYGS0n8\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"download_original_data = False #@param {type:\\\"boolean\\\"}\\n\",\n    \"\\n\",\n    \"if download_original_data:\\n\",\n    \"  train_tf_file = tf.keras.utils.get_file('train_tf.tfrecord',\\n\",\n    \"                                          'https://storage.googleapis.com/civil_comments_dataset/train_tf.tfrecord')\\n\",\n    \"  validate_tf_file = tf.keras.utils.get_file('validate_tf.tfrecord',\\n\",\n    \"                                             'https://storage.googleapis.com/civil_comments_dataset/validate_tf.tfrecord')\\n\",\n    \"\\n\",\n    \"  # The identity terms list will be grouped together by their categories\\n\",\n    \"  # (see 'IDENTITY_COLUMNS') on threshold 0.5. 
Only the identity term column,\\n\",\n    \"  # text column and label column will be kept after processing.\\n\",\n    \"  train_tf_file = util.convert_comments_data(train_tf_file)\\n\",\n    \"  validate_tf_file = util.convert_comments_data(validate_tf_file)\\n\",\n    \"\\n\",\n    \"else:\\n\",\n    \"  train_tf_file = tf.keras.utils.get_file('train_tf_processed.tfrecord',\\n\",\n    \"                                          'https://storage.googleapis.com/civil_comments_dataset/train_tf_processed.tfrecord')\\n\",\n    \"  validate_tf_file = tf.keras.utils.get_file('validate_tf_processed.tfrecord',\\n\",\n    \"                                             'https://storage.googleapis.com/civil_comments_dataset/validate_tf_processed.tfrecord')\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"zz1NLR5Uu3oQ\"\n   },\n   \"source\": [\n    \"## Create a TensorFlow Model Analysis Pipeline\\n\",\n    \"\\n\",\n    \"The Fairness Indicators library operates on [TensorFlow Model Analysis (TFMA) models](https://tensorflow.github.io/model-analysis/get_started). TFMA models wrap TensorFlow models with additional functionality to evaluate and visualize their results. The actual evaluation occurs inside of an [Apache Beam pipeline](https://beam.apache.org/documentation/programming-guide/).\\n\",\n    \"\\n\",\n    \"The steps you follow to create a TFMA pipeline are:\\n\",\n    \"1. Build a TensorFlow model\\n\",\n    \"2. Build a TFMA model on top of the TensorFlow model\\n\",\n    \"3. Run the model analysis in an orchestrator. 
The example model in this notebook uses Apache Beam as the orchestrator.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"7nSvu4IUCigW\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"def embedding_fairness_result(embedding, identity_term='gender'):\\n\",\n    \"  \\n\",\n    \"  model_dir = os.path.join(BASE_DIR, 'train',\\n\",\n    \"                         datetime.now().strftime('%Y%m%d-%H%M%S'))\\n\",\n    \"\\n\",\n    \"  print(\\\"Training classifier for \\\" + embedding)\\n\",\n    \"  classifier = example_model.train_model(model_dir,\\n\",\n    \"                                         train_tf_file,\\n\",\n    \"                                         LABEL,\\n\",\n    \"                                         TEXT_FEATURE,\\n\",\n    \"                                         FEATURE_MAP,\\n\",\n    \"                                         embedding)\\n\",\n    \"\\n\",\n    \"  # Create a unique path to store the results for this embedding.\\n\",\n    \"  embedding_name = embedding.split('/')[-2]\\n\",\n    \"  eval_result_path = os.path.join(BASE_DIR, 'eval_result', embedding_name)\\n\",\n    \"\\n\",\n    \"  example_model.evaluate_model(classifier,\\n\",\n    \"                               validate_tf_file,\\n\",\n    \"                               eval_result_path,\\n\",\n    \"                               identity_term,\\n\",\n    \"                               LABEL,\\n\",\n    \"                               FEATURE_MAP)\\n\",\n    \"  return tfma.load_eval_result(output_path=eval_result_path)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"jTPqije9Eg5b\"\n   },\n   \"source\": [\n    \"## Run TFMA & Fairness Indicators\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"8AvInTNt8Gyn\"\n   },\n   \"source\": [\n    \"### Fairness Indicators Metrics\\n\",\n    \"\\n\",\n    \"Some of the 
metrics available with Fairness Indicators are:\n",\n    \"\\n\",\n    \"* [Negative Rate, False Negative Rate (FNR), and True Negative Rate (TNR)](https://en.wikipedia.org/wiki/False_positives_and_false_negatives#False_positive_and_false_negative_rates)\\n\",\n    \"* [Positive Rate, False Positive Rate (FPR), and True Positive Rate (TPR)](https://en.wikipedia.org/wiki/False_positives_and_false_negatives#False_positive_and_false_negative_rates)\\n\",\n    \"* [Accuracy](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/Accuracy)\\n\",\n    \"* [Precision and Recall](https://en.wikipedia.org/wiki/Precision_and_recall)\\n\",\n    \"* [Precision-Recall AUC](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/AUC)\\n\",\n    \"* [ROC AUC](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"LGXCFtScblYt\"\n   },\n   \"source\": [\n    \"### Text Embeddings\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"1CI-1M5qXGjG\"\n   },\n   \"source\": [\n    \"**[TF-Hub](https://www.tensorflow.org/hub)** provides several **text embeddings**. These embeddings will serve as the feature column for the different models. This tutorial uses the following embeddings:\\n\",\n    \"\\n\",\n    \"* [**random-nnlm-en-dim128**](https://tfhub.dev/google/random-nnlm-en-dim128/1): random text embeddings; this serves as a convenient baseline.\\n\",\n    \"* [**nnlm-en-dim128**](https://tfhub.dev/google/nnlm-en-dim128/1): a text embedding based on [A Neural Probabilistic Language Model](http://www.jmlr.org/papers/volume3/bengio03a/bengio03a.pdf). 
\\n\",\n    \"* [**universal-sentence-encoder**](https://tfhub.dev/google/universal-sentence-encoder/2): a text embedding based on [Universal Sentence Encoder](https://arxiv.org/pdf/1803.11175.pdf).\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"xxq97Qt7itVL\"\n   },\n   \"source\": [\n    \"## Fairness Indicator Results\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"27FX15awixuK\"\n   },\n   \"source\": [\n    \"Compute fairness indicators with the `embedding_fairness_result` pipeline, and then render the results in the Fairness Indicator UI widget with `widget_view.render_fairness_indicator` for all the above embeddings.\\n\",\n    \"\\n\",\n    \"Note: You may need to run the `widget_view.render_fairness_indicator` cells twice for the visualization to be displayed.\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"yEUbZ93y8NCW\"\n   },\n   \"source\": [\n    \"#### Random NNLM\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"DkSuox-Pb6Pz\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"eval_result_random_nnlm = embedding_fairness_result('https://tfhub.dev/google/random-nnlm-en-dim128/1')\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"05xUesz6VpAe\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"widget_view.render_fairness_indicator(eval_result=eval_result_random_nnlm)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"jmKe8Z1b8SBy\"\n   },\n   \"source\": [\n    \"#### NNLM\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"5b8HcTUBckj1\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"eval_result_nnlm = embedding_fairness_result('https://tfhub.dev/google/nnlm-en-dim128/1')\"\n   ]\n  },\n  {\n   
\"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"n6hasLzFVrDN\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"widget_view.render_fairness_indicator(eval_result=eval_result_nnlm)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"1I4xEDNq8T0X\"\n   },\n   \"source\": [\n    \"#### Universal Sentence Encoder\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"GrdweWRkck8A\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"eval_result_use = embedding_fairness_result('https://tfhub.dev/google/universal-sentence-encoder/2')\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"JBABAkZMVtTK\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"widget_view.render_fairness_indicator(eval_result=eval_result_use)\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"402oTKbap77R\"\n   },\n   \"source\": [\n    \"### Comparing Embeddings\"\n   ]\n  },\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"UgnqwNjpqBuv\"\n   },\n   \"source\": [\n    \"You can also use Fairness Indicators to compare embeddings directly. 
For example, compare the models generated from the NNLM and USE embeddings.\"\n   ]\n  },\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"metadata\": {\n    \"id\": \"49ECfYWUp7Kk\"\n   },\n   \"outputs\": [],\n   \"source\": [\n    \"widget_view.render_fairness_indicator(multi_eval_results={'nnlm': eval_result_nnlm, 'use': eval_result_use})\"\n   ]\n  }\n ],\n \"metadata\": {\n  \"colab\": {\n   \"collapsed_sections\": [],\n   \"name\": \"Fairness Indicators on TF-Hub Text Embeddings\",\n   \"private_outputs\": true,\n   \"provenance\": [],\n   \"toc_visible\": true\n  },\n  \"kernelspec\": {\n   \"display_name\": \"Python 3 (ipykernel)\",\n   \"language\": \"python\",\n   \"name\": \"python3\"\n  },\n  \"language_info\": {\n   \"codemirror_mode\": {\n    \"name\": \"ipython\",\n    \"version\": 3\n   },\n   \"file_extension\": \".py\",\n   \"mimetype\": \"text/x-python\",\n   \"name\": \"python\",\n   \"nbconvert_exporter\": \"python\",\n   \"pygments_lexer\": \"ipython3\",\n   \"version\": \"3.9.22\"\n  }\n },\n \"nbformat\": 4,\n \"nbformat_minor\": 4\n}\n"
  },
  {
    "path": "docs/tutorials/README.md",
    "content": "The demos listed here are designed to be used with [Google Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb), a free cloud-based environment for Jupyter notebooks. They can also be run in a local Jupyter environment.\n\n## Google Colaboratory\n\nTo run these demos on the cloud, go to `File` -> `Open notebook` in the Colaboratory toolbar, then click on `Github` and paste in the demo's URL. Alternatively, you can use the [Open in Colab](https://chrome.google.com/webstore/detail/open-in-colab/iogfkhleblhcpcekbiedikdehleodpjo?hl=en) Chrome extension to open a notebook directly from GitHub.\n\n## Local Jupyter Environment\n\nTo run these demos on your local machine, you will need to install [Jupyter](https://jupyter.org/install). Then, run the following commands.\n\n    jupyter nbextension enable --py widgetsnbextension --sys-prefix\n    jupyter nbextension install --py --symlink tensorflow_model_analysis --sys-prefix\n    jupyter nbextension enable --py tensorflow_model_analysis --sys-prefix\n\nAfterwards, you can download any of the `.ipynb` files in this directory and run them via `jupyter notebook`.\n"
  },
  {
    "path": "docs/tutorials/_Deprecated_Fairness_Indicators_Lineage_Case_Study.ipynb",
    "content": "{\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"ueCj9KW2QTCP\"\n      },\n      \"source\": [\n        \"##### Copyright 2020 The TensorFlow Authors.\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": \"wFk_qMvcQZ8S\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"#@title Licensed under the Apache License, Version 2.0 (the \\\"License\\\");\\n\",\n        \"# you may not use this file except in compliance with the License.\\n\",\n        \"# You may obtain a copy of the License at\\n\",\n        \"#\\n\",\n        \"# https://www.apache.org/licenses/LICENSE-2.0\\n\",\n        \"#\\n\",\n        \"# Unless required by applicable law or agreed to in writing, software\\n\",\n        \"# distributed under the License is distributed on an \\\"AS IS\\\" BASIS,\\n\",\n        \"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\\n\",\n        \"# See the License for the specific language governing permissions and\\n\",\n        \"# limitations under the License.\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"HKYXncPn7mSs\"\n      },\n      \"source\": [\n        \"# Fairness Indicators Lineage Case Study\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"d7A099z02DB6\"\n      },\n      \"source\": [\n        \"\\u003ctable class=\\\"tfo-notebook-buttons\\\" align=\\\"left\\\"\\u003e\\n\",\n        \"  \\u003ctd\\u003e\\n\",\n        \"    \\u003ca target=\\\"_blank\\\" href=\\\"https://www.tensorflow.org/responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_Lineage_Case_Study\\\"\\u003e\\u003cimg src=\\\"https://www.tensorflow.org/images/tf_logo_32px.png\\\" /\\u003eView on TensorFlow.org\\u003c/a\\u003e\\n\",\n        \"  \\u003c/td\\u003e\\n\",\n        \"  
\\u003ctd\\u003e\\n\",\n        \"    \\u003ca target=\\\"_blank\\\" href=\\\"https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_Lineage_Case_Study.ipynb\\\"\\u003e\\u003cimg src=\\\"https://www.tensorflow.org/images/colab_logo_32px.png\\\" /\\u003eRun in Google Colab\\u003c/a\\u003e\\n\",\n        \"  \\u003c/td\\u003e\\n\",\n        \"  \\u003ctd\\u003e\\n\",\n        \"    \\u003ca target=\\\"_blank\\\" href=\\\"https://github.com/tensorflow/fairness-indicators/tree/master/docs/tutorials/Fairness_Indicators_Lineage_Case_Study.ipynb\\\"\\u003e\\u003cimg src=\\\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\\\" /\\u003eView on GitHub\\u003c/a\\u003e\\n\",\n        \"  \\u003c/td\\u003e\\n\",\n        \"  \\u003ctd\\u003e\\n\",\n        \"    \\u003ca href=\\\"https://storage.googleapis.com/tensorflow_docs/fairness-indicators/g3doc/tutorials/Fairness_Indicators_Lineage_Case_Study.ipynb\\\"\\u003e\\u003cimg src=\\\"https://www.tensorflow.org/images/download_logo_32px.png\\\" /\\u003eDownload notebook\\u003c/a\\u003e\\n\",\n        \"  \\u003c/td\\u003e\\n\",\n        \"  \\u003ctd\\u003e\\n\",\n        \"    \\u003ca href=\\\"https://tfhub.dev/google/random-nnlm-en-dim128/1\\\"\\u003e\\u003cimg src=\\\"https://www.tensorflow.org/images/hub_logo_32px.png\\\" /\\u003eSee TF Hub model\\u003c/a\\u003e\\n\",\n        \"  \\u003c/td\\u003e\\n\",\n        \"\\u003c/table\\u003e\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"lOKe4l_TSoKy\"\n      },\n      \"source\": [\n        \"\\u003e Warning: Estimators are deprecated (not recommended for new code).  Estimators run `v1.Session`-style code which is more difficult to write correctly, and can behave unexpectedly, especially when combined with TF 2 code. 
Estimators do fall under our [compatibility guarantees](https://tensorflow.org/guide/versions), but will receive no fixes other than security vulnerabilities. See the [migration guide](https://tensorflow.org/guide/migrate) for details.\n",\n        \"\\n\",\n        \"\\u003c!--\\n\",\n        \"TODO(b/192933099): update this to use keras instead of estimators.\\n\",\n        \"--\\u003e\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"oZWUeUxjlMjQ\"\n      },\n      \"source\": [\n        \"## COMPAS Dataset\\n\",\n        \"[COMPAS](https://www.propublica.org/datastore/dataset/compas-recidivism-risk-score-data-and-analysis) (Correctional Offender Management Profiling for Alternative Sanctions) is a public dataset, which contains approximately 18,000 criminal cases from Broward County, Florida, between January 2013 and December 2014. The data contains information about 11,000 unique defendants, including criminal history, demographics, and a risk score intended to represent the defendant’s likelihood of reoffending (recidivism). A machine learning model trained on this data has been used by judges and parole officers to determine whether or not to set bail and whether or not to grant parole. \\n\",\n        \"\\n\",\n        \"In 2016, [an article published in ProPublica](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing) found that the COMPAS model was incorrectly predicting that African-American defendants would recidivate at much higher rates than their white counterparts. For Caucasian defendants, the model made mistakes in the opposite direction, making incorrect predictions that they wouldn’t commit another crime. The authors went on to show that these biases were likely due to an uneven distribution in the data between African-Americans and Caucasian defendants. 
Specifically, the ground truth label of a negative example (a defendant **would not** commit another crime) and a positive example (defendant **would** commit another crime) were disproportionate between the two races. Since 2016, the COMPAS dataset has appeared frequently in the ML fairness literature \\u003csup\\u003e1, 2, 3\\u003c/sup\\u003e, with researchers using it to demonstrate techniques for identifying and remediating fairness concerns. This [tutorial from the FAT* 2018 conference](https://youtu.be/hEThGT-_5ho?t=1) illustrates how COMPAS can dramatically impact a defendant’s prospects in the real world. \\n\",\n        \"\\n\",\n        \"It is important to note that developing a machine learning model to predict pre-trial detention has a number of important ethical considerations. You can learn more about these issues in the Partnership on AI “[Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System](https://www.partnershiponai.org/report-on-machine-learning-in-risk-assessment-tools-in-the-u-s-criminal-justice-system/).” The Partnership on AI is a multi-stakeholder organization -- of which Google is a member -- that creates guidelines around AI.\\n\",\n        \"\\n\",\n        \"We’re using the COMPAS dataset only as an example of how to identify and remediate fairness concerns in data. This dataset is canonical in the algorithmic fairness literature. \\n\",\n        \"\\n\",\n        \"## About the Tools in this Case Study\\n\",\n        \"*   **[TensorFlow Extended (TFX)](https://www.tensorflow.org/tfx)** is a Google-production-scale machine learning platform based on TensorFlow. It provides a configuration framework and shared libraries to integrate common components needed to define, launch, and monitor your machine learning system.\\n\",\n        \"\\n\",\n        \"*   **[TensorFlow Model Analysis](https://www.tensorflow.org/tfx/tutorials/model_analysis/tfma_basic)** is a library for evaluating machine learning models. 
Users can evaluate their models on a large amount of data in a distributed manner and view metrics over different slices within a notebook.\n",\n        \"\\n\",\n        \"*   **[Fairness Indicators](https://tensorflow.github.io/fairness-indicators)** is a suite of tools built on top of TensorFlow Model Analysis that enables regular evaluation of fairness metrics in product pipelines.\\n\",\n        \"\\n\",\n        \"*   **[ML Metadata](https://www.tensorflow.org/tfx/guide/mlmd)** is a library for recording and retrieving the lineage and metadata of ML artifacts such as models, datasets, and metrics. Within TFX, ML Metadata will help us understand the artifacts created in a pipeline; an artifact is a unit of data that is passed between TFX components.\\n\",\n        \"\\n\",\n        \"*   **[TensorFlow Data Validation](https://www.tensorflow.org/tfx/guide/tfdv)** is a library to analyze your data and check for errors that can affect model training or serving.\\n\",\n        \"\\n\",\n        \"\\n\",\n        \"## Case Study Overview\\n\",\n        \"\\n\",\n        \"For the duration of this case study we will define “fairness concerns” as a bias within a model that negatively impacts a slice within our data. Specifically, we’re trying to limit any recidivism prediction that could be biased towards race.\\n\",\n        \"\\n\",\n        \"The walkthrough of the case study will proceed as follows:\\n\",\n        \"\\n\",\n        \"1.   Download, preprocess, and explore the initial dataset.\\n\",\n        \"2.   Build a TFX pipeline with the COMPAS dataset using a Keras binary classifier.\\n\",\n        \"3.   Run our results through TensorFlow Model Analysis, TensorFlow Data Validation, and load Fairness Indicators to explore any potential fairness concerns within our model.\\n\",\n        \"4.   Use ML Metadata to track all the artifacts for a model that we trained with TFX.\\n\",\n        \"5.   
Weight the initial COMPAS dataset for our second model to account for the uneven distribution between recidivism and race.\n",\n        \"6.   Review the performance changes within the new dataset.\\n\",\n        \"7.   Check the underlying changes within our TFX pipeline with ML Metadata to understand what changes were made between the two models. \\n\",\n        \"\\n\",\n        \"## Helpful Resources\\n\",\n        \"This case study is an extension of the case studies below. We recommend working through them first. \\n\",\n        \"*    [TFX Pipeline Overview](https://github.com/tensorflow/workshops/blob/master/tfx_labs/Lab_1_Pipeline_in_Colab.ipynb)\\n\",\n        \"*    [Fairness Indicator Case Study](https://github.com/tensorflow/fairness-indicators/blob/master/docs/tutorials/Fairness_Indicators_Example_Colab.ipynb)\\n\",\n        \"*    [TFX Data Validation](https://github.com/tensorflow/tfx/blob/master/tfx/examples/airflow_workshop/notebooks/step3.ipynb)\\n\",\n        \"\\n\",\n        \"\\n\",\n        \"## Setup\\n\",\n        \"To start, we will install the necessary packages, download the data, and import the required modules for the case study.\\n\",\n        \"\\n\",\n        \"To install the required packages for this case study in your notebook, run the pip command below.\\n\",\n        \"\\n\",\n        \"**Note:** See [here](https://github.com/tensorflow/tfx#compatible-versions) for a reference on compatibility between different versions of the libraries used in this case study.\\n\",\n        \"\\n\",\n        \"___\\n\",\n        \"\\n\",\n        \"1.  Wadsworth, C., Vera, F., Piech, C. (2017). Achieving Fairness Through Adversarial Learning: an Application to Recidivism Prediction. https://arxiv.org/abs/1807.00199.\\n\",\n        \"\\n\",\n        \"2.  Chouldechova, A., G’Sell, M., (2017). Fairer and more accurate, but for whom? https://arxiv.org/abs/1707.00046.\\n\",\n        \"\\n\",\n        \"3.  
Berk et al., (2017), Fairness in Criminal Justice Risk Assessments: The State of the Art, https://arxiv.org/abs/1703.09207.\\n\",\n        \"\\n\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"cellView\": \"both\",\n        \"id\": \"42BmC-ctlMjR\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"!python -m pip install -q -U \\\\\\n\",\n        \"  tfx \\\\\\n\",\n        \"  tensorflow-model-analysis \\\\\\n\",\n        \"  tensorflow-data-validation \\\\\\n\",\n        \"  tensorflow-metadata \\\\\\n\",\n        \"  tensorflow-transform \\\\\\n\",\n        \"  ml-metadata \\\\\\n\",\n        \"  tfx-bsl\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": \"yeS4Xy2MlMjW\",\n        \"scrolled\": true\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"import os\\n\",\n        \"import tempfile\\n\",\n        \"import six.moves.urllib as urllib\\n\",\n        \"\\n\",\n        \"from ml_metadata.metadata_store import metadata_store\\n\",\n        \"from ml_metadata.proto import metadata_store_pb2\\n\",\n        \"\\n\",\n        \"import pandas as pd\\n\",\n        \"from google.protobuf import text_format\\n\",\n        \"from sklearn.utils import shuffle\\n\",\n        \"import tensorflow as tf\\n\",\n        \"import tensorflow_data_validation as tfdv\\n\",\n        \"\\n\",\n        \"import tensorflow_model_analysis as tfma\\n\",\n        \"from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators\\n\",\n        \"from tensorflow_model_analysis.addons.fairness.view import widget_view\\n\",\n        \"\\n\",\n        \"import tfx\\n\",\n        \"from tfx.components.evaluator.component import Evaluator\\n\",\n        \"from tfx.components.example_gen.csv_example_gen.component import CsvExampleGen\\n\",\n        \"from 
tfx.components.schema_gen.component import SchemaGen\n",\n        \"from tfx.components.statistics_gen.component import StatisticsGen\\n\",\n        \"from tfx.components.trainer.component import Trainer\\n\",\n        \"from tfx.components.transform.component import Transform\\n\",\n        \"from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext\\n\",\n        \"from tfx.proto import evaluator_pb2\\n\",\n        \"from tfx.proto import trainer_pb2\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"YZQLS05WlMjV\"\n      },\n      \"source\": [\n        \"## Download and preprocess the dataset\\n\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": \"7uOVs7WJlMjl\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"# Download the COMPAS dataset and set up the required filepaths.\\n\",\n        \"_DATA_ROOT = tempfile.mkdtemp(prefix='tfx-data')\\n\",\n        \"_DATA_PATH = 'https://storage.googleapis.com/compas_dataset/cox-violent-parsed.csv'\\n\",\n        \"_DATA_FILEPATH = os.path.join(_DATA_ROOT, 'compas-scores-two-years.csv')\\n\",\n        \"\\n\",\n        \"data = urllib.request.urlopen(_DATA_PATH)\\n\",\n        \"_COMPAS_DF = pd.read_csv(data)\\n\",\n        \"\\n\",\n        \"# To simplify the case study, we will only use the columns that will be used for\\n\",\n        \"# our model.\\n\",\n        \"_COLUMN_NAMES = [\\n\",\n        \"  'age',\\n\",\n        \"  'c_charge_desc',\\n\",\n        \"  'c_charge_degree',\\n\",\n        \"  'c_days_from_compas',\\n\",\n        \"  'is_recid',\\n\",\n        \"  'juv_fel_count',\\n\",\n        \"  'juv_misd_count',\\n\",\n        \"  'juv_other_count',\\n\",\n        \"  'priors_count',\\n\",\n        \"  'r_days_from_arrest',\\n\",\n        \"  'race',\\n\",\n        \"  'sex',\\n\",\n        \"  'vr_charge_desc',\\n\",\n     
   \"]\\n\",\n        \"_COMPAS_DF = _COMPAS_DF[_COLUMN_NAMES]\\n\",\n        \"\\n\",\n        \"# We will use 'is_recid' as our ground truth label, which is a boolean value\\n\",\n        \"# indicating if a defendant committed another crime. There are some rows with -1\\n\",\n        \"# indicating that there is no data. We will drop these rows from training.\\n\",\n        \"_COMPAS_DF = _COMPAS_DF[_COMPAS_DF['is_recid'] != -1]\\n\",\n        \"\\n\",\n        \"# Given the distribution between races in this dataset, we will only focus on\\n\",\n        \"# recidivism for African-Americans and Caucasians.\\n\",\n        \"_COMPAS_DF = _COMPAS_DF[\\n\",\n        \"  _COMPAS_DF['race'].isin(['African-American', 'Caucasian'])]\\n\",\n        \"\\n\",\n        \"# Add a weight feature that will be used during the second part of this\\n\",\n        \"# case study to help address fairness concerns.\\n\",\n        \"_COMPAS_DF['sample_weight'] = 0.8\\n\",\n        \"\\n\",\n        \"# Write the DataFrame back to a CSV file for our TFX pipeline.\\n\",\n        \"_COMPAS_DF.to_csv(_DATA_FILEPATH, index=False, na_rep='')\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"JyCQbe5RlMjn\"\n      },\n      \"source\": [\n        \"## Building a TFX Pipeline\\n\",\n        \"\\n\",\n        \"---\\n\",\n        \"There are several [TFX Pipeline Components](https://www.tensorflow.org/tfx/guide#tfx_pipeline_components) that can be used for a production model, but for the purposes of this case study we will focus on using only the following components: \\n\",\n        \"*   **ExampleGen** to read our dataset.\\n\",\n        \"*   **StatisticsGen** to calculate the statistics of our dataset.\\n\",\n        \"*   **SchemaGen** to create a data schema.\\n\",\n        \"*   **Transform** for feature engineering.\\n\",\n        \"*   **Trainer** to run our machine learning model.\\n\",\n        \"\\n\",\n        \"## Create the 
InteractiveContext\\n\",\n        \"\\n\",\n        \"To run TFX within a notebook, we first will need to create an `InteractiveContext` to run the components interactively. \\n\",\n        \"\\n\",\n        \"`InteractiveContext` will use a temporary directory with an ephemeral ML Metadata database instance. To use your own pipeline root or database, the optional properties `pipeline_root` and `metadata_connection_config` may be passed to `InteractiveContext`.\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": \"XVMS3Dz7xk8M\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"context = InteractiveContext()\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"NxAOGNCelMjq\"\n      },\n      \"source\": [\n        \"### TFX ExampleGen Component\\n\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": \"0hzCIDdblMjr\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"# The ExampleGen TFX Pipeline component ingests data into TFX pipelines.\\n\",\n        \"# It consumes external files/services to generate Examples which will be read by\\n\",\n        \"# other TFX components. 
It also provides consistent and configurable partitioning,\n",\n        \"# and shuffles the dataset for ML best practice.\\n\",\n        \"\\n\",\n        \"example_gen = CsvExampleGen(input_base=_DATA_ROOT)\\n\",\n        \"context.run(example_gen)\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"SW23fvThlMjz\"\n      },\n      \"source\": [\n        \"### TFX StatisticsGen Component\\n\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": \"28D_qP3IlMj0\",\n        \"scrolled\": false\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"# The StatisticsGen TFX pipeline component generates feature statistics over\\n\",\n        \"# both training and serving data, which can be used by other pipeline\\n\",\n        \"# components. StatisticsGen uses Beam to scale to large datasets.\\n\",\n        \"\\n\",\n        \"statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])\\n\",\n        \"context.run(statistics_gen)\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"a72E7hT5lMj9\"\n      },\n      \"source\": [\n        \"### TFX SchemaGen Component\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": \"dkfTgKCBlMj9\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"# Some TFX components use a description of your input data called a schema. The\\n\",\n        \"# schema is an instance of schema.proto. It can specify data types for feature\\n\",\n        \"# values, whether a feature has to be present in all examples, allowed value\\n\",\n        \"# ranges, and other properties. 
A SchemaGen pipeline component will\\n\",\n        \"# automatically generate a schema by inferring types, categories, and ranges\\n\",\n        \"# from the training data.\\n\",\n        \"\\n\",\n        \"infer_schema = SchemaGen(statistics=statistics_gen.outputs['statistics'])\\n\",\n        \"context.run(infer_schema)\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"43z_COkolMkI\"\n      },\n      \"source\": [\n        \"### TFX Transform Component\\n\",\n        \"\\n\",\n        \"The `Transform` component performs data transformations and feature engineering.  The results include an input TensorFlow graph which is used during both training and serving to preprocess the data before training or inference.  This graph becomes part of the SavedModel that is the result of model training.  Since the same input graph is used for both training and serving, the preprocessing will always be the same, and only needs to be written once.\\n\",\n        \"\\n\",\n        \"The Transform component requires more code than many other components because of the arbitrary complexity of the feature engineering that you may need for the data and/or model that you're working with.\\n\",\n        \"\\n\",\n        \"Define some constants and functions for both the `Transform` component and the `Trainer` component.  
Define them in a Python module; since you are working in a notebook, save the module to disk using the `%%writefile` magic command.\n",
        "\n",
        "The transformations that we will perform in this case study are as follows:\n",
        "*   For string values we will generate a vocabulary that maps each value to an integer via `tft.compute_and_apply_vocabulary`.\n",
        "*   For integer values we will standardize the column to mean 0 and variance 1 via `tft.scale_to_z_score`.\n",
        "*   Replace missing values with an empty string or 0, depending on the feature type.\n",
        "*   Append `_xf` to column names to denote the features that were processed in the Transform Component.\n",
        "\n",
        "\n",
        "Now let's define a module containing the `preprocessing_fn()` function that we will pass to the `Transform` component:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "83MZZqUQlMkJ"
      },
      "outputs": [],
      "source": [
        "# Setup paths for the Transform Component.\n",
        "_transform_module_file = 'compas_transform.py'"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "NLzxWiOBlMkL"
      },
      "outputs": [],
      "source": [
        "%%writefile {_transform_module_file}\n",
        "import tensorflow as tf\n",
        "import tensorflow_transform as tft\n",
        "\n",
        "CATEGORICAL_FEATURE_KEYS = [\n",
        "    'sex',\n",
        "    'race',\n",
        "    'c_charge_desc',\n",
        "    'c_charge_degree',\n",
        "]\n",
        "\n",
        "INT_FEATURE_KEYS = [\n",
        "    'age',\n",
        "    'c_days_from_compas',\n",
        "    'juv_fel_count',\n",
        "    
'juv_misd_count',\n",
        "    'juv_other_count',\n",
        "    'priors_count',\n",
        "    'sample_weight',\n",
        "]\n",
        "\n",
        "LABEL_KEY = 'is_recid'\n",
        "\n",
        "# List of the unique values for the items within CATEGORICAL_FEATURE_KEYS.\n",
        "MAX_CATEGORICAL_FEATURE_VALUES = [\n",
        "    2,\n",
        "    6,\n",
        "    513,\n",
        "    14,\n",
        "]\n",
        "\n",
        "\n",
        "def transformed_name(key):\n",
        "  return '{}_xf'.format(key)\n",
        "\n",
        "\n",
        "def preprocessing_fn(inputs):\n",
        "  \"\"\"tf.transform's callback function for preprocessing inputs.\n",
        "\n",
        "  Args:\n",
        "    inputs: Map from feature keys to raw features.\n",
        "\n",
        "  Returns:\n",
        "    Map from string feature key to transformed feature operations.\n",
        "  \"\"\"\n",
        "  outputs = {}\n",
        "  for key in CATEGORICAL_FEATURE_KEYS:\n",
        "    outputs[transformed_name(key)] = tft.compute_and_apply_vocabulary(\n",
        "        _fill_in_missing(inputs[key]),\n",
        "        vocab_filename=key)\n",
        "\n",
        "  for key in INT_FEATURE_KEYS:\n",
        "    outputs[transformed_name(key)] = tft.scale_to_z_score(\n",
        "        _fill_in_missing(inputs[key]))\n",
        "\n",
        "  # The target label indicates whether the defendant is charged with another crime.\n",
        "  outputs[transformed_name(LABEL_KEY)] = _fill_in_missing(inputs[LABEL_KEY])\n",
        "  return outputs\n",
        "\n",
        "\n",
        "def _fill_in_missing(tensor_value):\n",
        "  \"\"\"Replaces missing values in a SparseTensor.\n",
        "\n",
        "  Fills in missing 
values of `tensor_value` with '' or 0, and converts to a\\n\",\n        \"  dense tensor.\\n\",\n        \"\\n\",\n        \"  Args:\\n\",\n        \"    tensor_value: A `SparseTensor` of rank 2. Its dense shape should have size\\n\",\n        \"      at most 1 in the second dimension.\\n\",\n        \"\\n\",\n        \"  Returns:\\n\",\n        \"    A rank 1 tensor where missing values of `tensor_value` are filled in.\\n\",\n        \"  \\\"\\\"\\\"\\n\",\n        \"  if not isinstance(tensor_value, tf.sparse.SparseTensor):\\n\",\n        \"    return tensor_value\\n\",\n        \"  default_value = '' if tensor_value.dtype == tf.string else 0\\n\",\n        \"  sparse_tensor = tf.SparseTensor(\\n\",\n        \"      tensor_value.indices,\\n\",\n        \"      tensor_value.values,\\n\",\n        \"      [tensor_value.dense_shape[0], 1])\\n\",\n        \"  dense_tensor = tf.sparse.to_dense(sparse_tensor, default_value)\\n\",\n        \"  return tf.squeeze(dense_tensor, axis=1)\\n\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": \"5yzFOQrPlMkM\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"# Build and run the Transform Component.\\n\",\n        \"transform = Transform(\\n\",\n        \"    examples=example_gen.outputs['examples'],\\n\",\n        \"    schema=infer_schema.outputs['schema'],\\n\",\n        \"    module_file=_transform_module_file\\n\",\n        \")\\n\",\n        \"context.run(transform)\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"A_ubj158lMkP\"\n      },\n      \"source\": [\n        \"### TFX Trainer Component\\n\",\n        \"The `Trainer` Component trains a specified TensorFlow model.\\n\",\n        \"\\n\",\n        \"In order to run the trainer component we need to create a Python module containing a `trainer_fn` function that will return an estimator for our model. 
If you prefer creating a Keras model, you can do so and then convert it to an estimator using `keras.model_to_estimator()`.\n",
        "\n",
        "For our case study we will build a Keras model and convert it to an estimator with [`keras.model_to_estimator()`](https://www.tensorflow.org/api_docs/python/tf/keras/estimator/model_to_estimator)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "K9zxx6CnlMkQ"
      },
      "outputs": [],
      "source": [
        "# Setup paths for the Trainer Component.\n",
        "_trainer_module_file = 'compas_trainer.py'"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "KhuwfYIRlMkR",
        "scrolled": true
      },
      "outputs": [],
      "source": [
        "%%writefile {_trainer_module_file}\n",
        "import tensorflow as tf\n",
        "\n",
        "import tensorflow_model_analysis as tfma\n",
        "import tensorflow_transform as tft\n",
        "from tensorflow_transform.tf_metadata import schema_utils\n",
        "\n",
        "from compas_transform import *\n",
        "\n",
        "_BATCH_SIZE = 1000\n",
        "_LEARNING_RATE = 0.00001\n",
        "_MAX_CHECKPOINTS = 1\n",
        "_SAVE_CHECKPOINT_STEPS = 999\n",
        "\n",
        "\n",
        "def transformed_names(keys):\n",
        "  return [transformed_name(key) for key in keys]\n",
        "\n",
        "\n",
        "def transformed_name(key):\n",
        "  return '{}_xf'.format(key)\n",
        "\n",
        "\n",
        "def _gzip_reader_fn(filenames):\n",
        "  \"\"\"Returns a record reader that can read gzip'ed files.\n",
        "\n",
        "  Args:\n",
        "    filenames: A tf.string tensor or tf.data.Dataset containing one or more\n",
        "      filenames.\n",
        "\n",
        "  Returns:\n",
        "    A `TFRecordDataset` that reads the gzip-compressed TFRecord files.\n",
        "  \"\"\"\n",
        "  return tf.data.TFRecordDataset(filenames, compression_type='GZIP')\n",
        "\n",
        "\n",
        "# tf.Transform considers these features as \"raw\".\n",
        "def _get_raw_feature_spec(schema):\n",
        "  \"\"\"Generates a feature spec from a Schema proto.\n",
        "\n",
        "  Args:\n",
        "    schema: A Schema proto.\n",
        "\n",
        "  Returns:\n",
        "    A feature spec defined as a dict whose keys are feature names and values are\n",
        "      instances of FixedLenFeature, VarLenFeature or SparseFeature.\n",
        "  \"\"\"\n",
        "  return schema_utils.schema_as_feature_spec(schema).feature_spec\n",
        "\n",
        "\n",
        "def _example_serving_receiver_fn(tf_transform_output, schema):\n",
        "  \"\"\"Builds the serving inputs.\n",
        "\n",
        "  Args:\n",
        "    tf_transform_output: A TFTransformOutput.\n",
        "    schema: the schema of the input data.\n",
        "\n",
        "  Returns:\n",
        "    TensorFlow graph which parses examples, applying tf-transform to them.\n",
        "  \"\"\"\n",
        "  raw_feature_spec = _get_raw_feature_spec(schema)\n",
        "  raw_feature_spec.pop(LABEL_KEY)\n",
        "\n",
        "  raw_input_fn = 
tf.estimator.export.build_parsing_serving_input_receiver_fn(\\n\",\n        \"      raw_feature_spec)\\n\",\n        \"  serving_input_receiver = raw_input_fn()\\n\",\n        \"\\n\",\n        \"  transformed_features = tf_transform_output.transform_raw_features(\\n\",\n        \"      serving_input_receiver.features)\\n\",\n        \"  transformed_features.pop(transformed_name(LABEL_KEY))\\n\",\n        \"  return tf.estimator.export.ServingInputReceiver(\\n\",\n        \"      transformed_features, serving_input_receiver.receiver_tensors)\\n\",\n        \"\\n\",\n        \"\\n\",\n        \"def _eval_input_receiver_fn(tf_transform_output, schema):\\n\",\n        \"  \\\"\\\"\\\"Builds everything needed for the tf-model-analysis to run the model.\\n\",\n        \"\\n\",\n        \"  Args:\\n\",\n        \"    tf_transform_output: A TFTransformOutput.\\n\",\n        \"    schema: the schema of the input data.\\n\",\n        \"\\n\",\n        \"  Returns:\\n\",\n        \"    EvalInputReceiver function, which contains:\\n\",\n        \"      - TensorFlow graph which parses raw untransformed features, applies the\\n\",\n        \"          tf-transform preprocessing operators.\\n\",\n        \"      - Set of raw, untransformed features.\\n\",\n        \"      - Label against which predictions will be compared.\\n\",\n        \"  \\\"\\\"\\\"\\n\",\n        \"  # Notice that the inputs are raw features, not transformed features here.\\n\",\n        \"  raw_feature_spec = _get_raw_feature_spec(schema)\\n\",\n        \"\\n\",\n        \"  serialized_tf_example = tf.compat.v1.placeholder(\\n\",\n        \"      dtype=tf.string, shape=[None], name='input_example_tensor')\\n\",\n        \"\\n\",\n        \"  # Add a parse_example operator to the tensorflow graph, which will parse\\n\",\n        \"  # raw, untransformed, tf examples.\\n\",\n        \"  features = tf.io.parse_example(\\n\",\n        \"      serialized=serialized_tf_example, 
features=raw_feature_spec)\n",
        "\n",
        "  transformed_features = tf_transform_output.transform_raw_features(features)\n",
        "  labels = transformed_features.pop(transformed_name(LABEL_KEY))\n",
        "\n",
        "  receiver_tensors = {'examples': serialized_tf_example}\n",
        "\n",
        "  return tfma.export.EvalInputReceiver(\n",
        "      features=transformed_features,\n",
        "      receiver_tensors=receiver_tensors,\n",
        "      labels=labels)\n",
        "\n",
        "\n",
        "def _input_fn(filenames, tf_transform_output, batch_size=200):\n",
        "  \"\"\"Generates features and labels for training or evaluation.\n",
        "\n",
        "  Args:\n",
        "    filenames: List of TFRecord files to read data from.\n",
        "    tf_transform_output: A TFTransformOutput.\n",
        "    batch_size: First dimension size of the Tensors returned by input_fn.\n",
        "\n",
        "  Returns:\n",
        "    A (features, indices) tuple where features is a dictionary of\n",
        "      Tensors, and indices is a single Tensor of label indices.\n",
        "  \"\"\"\n",
        "  transformed_feature_spec = (\n",
        "      tf_transform_output.transformed_feature_spec().copy())\n",
        "\n",
        "  dataset = tf.compat.v1.data.experimental.make_batched_features_dataset(\n",
        "      filenames,\n",
        "      batch_size,\n",
        "      transformed_feature_spec,\n",
        "      shuffle=False,\n",
        "      reader=_gzip_reader_fn)\n",
        "\n",
        "  transformed_features = dataset.make_one_shot_iterator().get_next()\n",
        "\n",
        "  # We pop the label because we do not want to use it as a feature while we're\n",
        "  # training.\n",
        "  return transformed_features, 
transformed_features.pop(\\n\",\n        \"      transformed_name(LABEL_KEY))\\n\",\n        \"\\n\",\n        \"\\n\",\n        \"def _keras_model_builder():\\n\",\n        \"  \\\"\\\"\\\"Build a keras model for COMPAS dataset classification.\\n\",\n        \"  \\n\",\n        \"  Returns:\\n\",\n        \"    A compiled Keras model.\\n\",\n        \"  \\\"\\\"\\\"\\n\",\n        \"  feature_columns = []\\n\",\n        \"  feature_layer_inputs = {}\\n\",\n        \"\\n\",\n        \"  for key in transformed_names(INT_FEATURE_KEYS):\\n\",\n        \"    feature_columns.append(tf.feature_column.numeric_column(key))\\n\",\n        \"    feature_layer_inputs[key] = tf.keras.Input(shape=(1,), name=key)\\n\",\n        \"\\n\",\n        \"  for key, num_buckets in zip(transformed_names(CATEGORICAL_FEATURE_KEYS),\\n\",\n        \"                              MAX_CATEGORICAL_FEATURE_VALUES):\\n\",\n        \"    feature_columns.append(\\n\",\n        \"        tf.feature_column.indicator_column(\\n\",\n        \"            tf.feature_column.categorical_column_with_identity(\\n\",\n        \"                key, num_buckets=num_buckets)))\\n\",\n        \"    feature_layer_inputs[key] = tf.keras.Input(\\n\",\n        \"        shape=(1,), name=key, dtype=tf.dtypes.int32)\\n\",\n        \"\\n\",\n        \"  feature_columns_input = tf.keras.layers.DenseFeatures(feature_columns)\\n\",\n        \"  feature_layer_outputs = feature_columns_input(feature_layer_inputs)\\n\",\n        \"\\n\",\n        \"  dense_layers = tf.keras.layers.Dense(\\n\",\n        \"      20, activation='relu', name='dense_1')(feature_layer_outputs)\\n\",\n        \"  dense_layers = tf.keras.layers.Dense(\\n\",\n        \"      10, activation='relu', name='dense_2')(dense_layers)\\n\",\n        \"  output = tf.keras.layers.Dense(\\n\",\n        \"      1, name='predictions')(dense_layers)\\n\",\n        \"\\n\",\n        \"  model = tf.keras.Model(\\n\",\n        \"      inputs=[v for v in 
feature_layer_inputs.values()], outputs=output)\\n\",\n        \"\\n\",\n        \"  model.compile(\\n\",\n        \"      loss=tf.keras.losses.MeanAbsoluteError(),\\n\",\n        \"      optimizer=tf.optimizers.Adam(learning_rate=_LEARNING_RATE))\\n\",\n        \"\\n\",\n        \"  return model\\n\",\n        \"\\n\",\n        \"\\n\",\n        \"# TFX will call this function.\\n\",\n        \"def trainer_fn(hparams, schema):\\n\",\n        \"  \\\"\\\"\\\"Build the estimator using the high level API.\\n\",\n        \"\\n\",\n        \"  Args:\\n\",\n        \"    hparams: Hyperparameters used to train the model as name/value pairs.\\n\",\n        \"    schema: Holds the schema of the training examples.\\n\",\n        \"\\n\",\n        \"  Returns:\\n\",\n        \"    A dict of the following:\\n\",\n        \"      - estimator: The estimator that will be used for training and eval.\\n\",\n        \"      - train_spec: Spec for training.\\n\",\n        \"      - eval_spec: Spec for eval.\\n\",\n        \"      - eval_input_receiver_fn: Input function for eval.\\n\",\n        \"  \\\"\\\"\\\"\\n\",\n        \"  tf_transform_output = tft.TFTransformOutput(hparams.transform_output)\\n\",\n        \"\\n\",\n        \"  train_input_fn = lambda: _input_fn(\\n\",\n        \"      hparams.train_files,\\n\",\n        \"      tf_transform_output,\\n\",\n        \"      batch_size=_BATCH_SIZE)\\n\",\n        \"\\n\",\n        \"  eval_input_fn = lambda: _input_fn(\\n\",\n        \"      hparams.eval_files,\\n\",\n        \"      tf_transform_output,\\n\",\n        \"      batch_size=_BATCH_SIZE)\\n\",\n        \"\\n\",\n        \"  train_spec = tf.estimator.TrainSpec(\\n\",\n        \"      train_input_fn,\\n\",\n        \"      max_steps=hparams.train_steps)\\n\",\n        \"\\n\",\n        \"  serving_receiver_fn = lambda: _example_serving_receiver_fn(\\n\",\n        \"      tf_transform_output, schema)\\n\",\n        \"\\n\",\n        \"  exporter = 
tf.estimator.FinalExporter('compas', serving_receiver_fn)\\n\",\n        \"  eval_spec = tf.estimator.EvalSpec(\\n\",\n        \"      eval_input_fn,\\n\",\n        \"      steps=hparams.eval_steps,\\n\",\n        \"      exporters=[exporter],\\n\",\n        \"      name='compas-eval')\\n\",\n        \"\\n\",\n        \"  run_config = tf.estimator.RunConfig(\\n\",\n        \"      save_checkpoints_steps=_SAVE_CHECKPOINT_STEPS,\\n\",\n        \"      keep_checkpoint_max=_MAX_CHECKPOINTS)\\n\",\n        \"\\n\",\n        \"  run_config = run_config.replace(model_dir=hparams.serving_model_dir)\\n\",\n        \"\\n\",\n        \"  estimator = tf.keras.estimator.model_to_estimator(\\n\",\n        \"      keras_model=_keras_model_builder(), config=run_config)\\n\",\n        \"\\n\",\n        \"  # Create an input receiver for TFMA processing.\\n\",\n        \"  receiver_fn = lambda: _eval_input_receiver_fn(tf_transform_output, schema)\\n\",\n        \"\\n\",\n        \"  return {\\n\",\n        \"      'estimator': estimator,\\n\",\n        \"      'train_spec': train_spec,\\n\",\n        \"      'eval_spec': eval_spec,\\n\",\n        \"      'eval_input_receiver_fn': receiver_fn\\n\",\n        \"  }\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": \"oiC1wABllMkU\",\n        \"scrolled\": false\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"# Uses user-provided Python function that implements a model using TensorFlow's\\n\",\n        \"# Estimators API.\\n\",\n        \"trainer = Trainer(\\n\",\n        \"    module_file=_trainer_module_file,\\n\",\n        \"    transformed_examples=transform.outputs['transformed_examples'],\\n\",\n        \"    schema=infer_schema.outputs['schema'],\\n\",\n        \"    transform_graph=transform.outputs['transform_graph'],\\n\",\n        \"    train_args=trainer_pb2.TrainArgs(num_steps=10000),\\n\",\n        \"    
eval_args=trainer_pb2.EvalArgs(num_steps=5000)\n",
        ")\n",
        "context.run(trainer)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0tfnGpl2lMkv"
      },
      "source": [
        "## TensorFlow Model Analysis\n",
        "\n",
        "Now that our model is developed and trained within TFX, we can use several additional components within the TFX ecosystem to understand our model's performance in more detail. By looking at different metrics we can get a better picture of how the overall model performs for different slices of our data and make sure the model is not underperforming for any subgroup.\n",
        "\n",
        "First we'll examine TensorFlow Model Analysis, which is a library for evaluating TensorFlow models. It allows users to evaluate their models on large amounts of data in a distributed manner, using the same metrics defined in their trainer. These metrics can be computed over different slices of data and visualized in a notebook.\n",
        "\n",
        "For a list of possible metrics that can be added into TensorFlow Model Analysis see [here](https://github.com/tensorflow/model-analysis/blob/master/g3doc/metrics.md).\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "i8VdZ4z3lMk0"
      },
      "outputs": [],
      "source": [
        "# Uses TensorFlow Model Analysis to compute evaluation statistics over\n",
        "# features of a model.\n",
        "model_analyzer = Evaluator(\n",
        "    examples=example_gen.outputs['examples'],\n",
        "    model=trainer.outputs['model'],\n",
        "\n",
        "    eval_config = text_format.Parse(\"\"\"\n",
        "      model_specs {\n",
        "        label_key: 'is_recid'\n",
        "      }\n",
        "      
metrics_specs {\n",
        "        metrics {class_name: \"BinaryAccuracy\"}\n",
        "        metrics {class_name: \"AUC\"}\n",
        "        metrics {\n",
        "          class_name: \"FairnessIndicators\"\n",
        "          config: '{\"thresholds\": [0.25, 0.5, 0.75]}'\n",
        "        }\n",
        "      }\n",
        "      slicing_specs {\n",
        "        feature_keys: 'race'\n",
        "      }\n",
        "    \"\"\", tfma.EvalConfig())\n",
        ")\n",
        "context.run(model_analyzer)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gXGxPEAnBkUM"
      },
      "source": [
        "## Fairness Indicators\n",
        "\n",
        "Load Fairness Indicators to examine the underlying data."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "4ZgUtH_OBg2x"
      },
      "outputs": [],
      "source": [
        "evaluation_uri = model_analyzer.outputs['evaluation'].get()[0].uri\n",
        "eval_result = tfma.load_eval_result(evaluation_uri)\n",
        "tfma.addons.fairness.view.widget_view.render_fairness_indicator(eval_result)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "igoChEEblMk4"
      },
      "source": [
        "Fairness Indicators will allow us to drill down to see the performance of different slices and is designed to support teams in evaluating and improving models for fairness concerns. It enables easy computation of fairness metrics for binary and multiclass classifiers and scales to use cases of any size.\n",
        "\n",
        "We will load Fairness Indicators into this notebook and analyze the results. 
After you have explored Fairness Indicators for a moment, examine the False Positive Rate and False Negative Rate tabs in the tool. In this case study, we're concerned with trying to reduce the number of false predictions of recidivism, corresponding to the [False Positive Rate](https://en.wikipedia.org/wiki/Receiver_operating_characteristic).\n",
        "\n",
        "![Type I and Type II errors](http://services.google.com/fh/gumdrop/preview/blogs/type_i_type_ii.png)\n",
        "\n",
        "Within the Fairness Indicators tool you'll see two dropdown options:\n",
        "1.   A \"Baseline\" option that is set by `column_for_slicing`.\n",
        "2.   A \"Thresholds\" option that is set by `fairness_indicator_thresholds`.\n",
        "\n",
        "“Baseline” is the slice you want to compare all other slices to. Most commonly, it is represented by the overall slice, but it can also be one of the specific slices. \n",
        "\n",
        "\"Threshold\" is the cutoff value a binary classification model uses to decide whether a prediction is positive or negative. When setting a threshold there are two things you should keep in mind.\n",
        "\n",
        "1.   Precision: What is the downside if your prediction results in a Type I error? In this case study a lower threshold would mean we're predicting more defendants *will* commit another crime when they actually *don't*.\n",
        "2.   Recall: What is the downside of a Type II error? 
In this case study a higher threshold would mean we're predicting more defendants *will not* commit another crime when they actually *do*.\n",
        "\n",
        "We will set an arbitrary threshold of 0.75 and focus only on the fairness metrics for African-American and Caucasian defendants, given that the sample sizes for the other races aren’t large enough to draw statistically significant conclusions.\n",
        "\n",
        "The rates below might differ slightly based on how the data was shuffled at the beginning of this case study, but take a look at the difference between African-American and Caucasian defendants. At a lower threshold our model is more likely to predict that a Caucasian defendant will commit a second crime compared to an African-American defendant. However, this prediction inverts as we increase our threshold. \n",
        "\n",
        "* **False Positive Rate @ 0.75**\n",
        "  * **African-American:** ~30%\n",
        "     * AUC: 0.71\n",
        "     * Binary Accuracy: 0.67\n",
        "  * **Caucasian:** ~8%\n",
        "     * AUC: 0.71\n",
        "     * Binary Accuracy: 0.67\n",
        "\n",
        "More information on Type I/II errors and threshold setting can be found [here](https://developers.google.com/machine-learning/crash-course/classification/thresholding).\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Mpbs4x9dB2PA"
      },
      "source": [
        "## ML Metadata\n",
        "\n",
        "To understand where disparity could be coming from and to take a snapshot of our current model, we can use ML Metadata for recording and retrieving metadata associated with our model. 
ML Metadata is an integral part of TFX, but is designed so that it can be used independently.\n",
        "\n",
        "For this case study, we will list all of the artifacts that we developed previously. By cycling through the artifacts, executions, and contexts we will have a high level view of our TFX model to dig into where any potential issues are coming from. This will provide us with a baseline overview of how our model was developed and what TFX components helped to develop our initial model.\n",
        "\n",
        "We will start by laying out the high level artifact, execution, and context types in our model.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "0wjiFKOxlMkn"
      },
      "outputs": [],
      "source": [
        "# Connect to the TFX database.\n",
        "connection_config = metadata_store_pb2.ConnectionConfig()\n",
        "\n",
        "connection_config.sqlite.filename_uri = os.path.join(\n",
        "  context.pipeline_root, 'metadata.sqlite')\n",
        "store = metadata_store.MetadataStore(connection_config)\n",
        "\n",
        "def _mlmd_type_to_dataframe(mlmd_type):\n",
        "  \"\"\"Helper function to turn MLMD types into a Pandas DataFrame.\n",
        "\n",
        "  Args:\n",
        "    mlmd_type: Metadata store type.\n",
        "\n",
        "  Returns:\n",
        "    DataFrame containing type ID, Name, and Properties.\n",
        "  \"\"\"\n",
        "  pd.set_option('display.max_columns', None)  \n",
        "  pd.set_option('display.expand_frame_repr', False)\n",
        "\n",
        "  column_names = ['ID', 'Name', 'Properties']\n",
        "  df = pd.DataFrame(columns=column_names)\n",
        "  for a_type in mlmd_type:\n",
        "    mlmd_row = pd.DataFrame([[a_type.id, 
a_type.name, a_type.properties]],\n",
        "                            columns=column_names)\n",
        "    df = pd.concat([df, mlmd_row])\n",
        "  return df\n",
        "\n",
        "# ML Metadata stores strongly-typed Artifacts, Executions, and Contexts.\n",
        "# First, we can use type APIs to understand what is defined in ML Metadata\n",
        "# by the current version of TFX. We'll be able to view all the previous runs\n",
        "# that created our initial model.\n",
        "print('Artifact Types:')\n",
        "display(_mlmd_type_to_dataframe(store.get_artifact_types()))\n",
        "\n",
        "print('\\nExecution Types:')\n",
        "display(_mlmd_type_to_dataframe(store.get_execution_types()))\n",
        "\n",
        "print('\\nContext Types:')\n",
        "display(_mlmd_type_to_dataframe(store.get_context_types()))\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lJQoer33ZEXD"
      },
      "source": [
        "## Identify where the fairness issue could be coming from\n",
        "\n",
        "For each of the above artifact, execution, and context types we can use ML Metadata to dig into the attributes and how each part of our ML pipeline was developed.\n",
        "\n",
        "We'll start by diving into the `StatisticsGen` to examine the underlying data that we initially fed into the model. By knowing the artifacts within our model we can use ML Metadata and TensorFlow Data Validation to look backward and forward within the model to identify where a potential problem is coming from.\n",
        "\n",
        "After running the below cell, select `Lift (Y=1)` in the second chart on the `Chart to show` tab to see the [lift](https://en.wikipedia.org/wiki/Lift_(data_mining)) between the different data slices. 
Within `race`, the lift for African-American is approximately 1.08 whereas Caucasian is approximately 0.86."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "xvcw9KL0byeY"
      },
      "outputs": [],
      "source": [
        "statistics_gen = StatisticsGen(\n",
        "    examples=example_gen.outputs['examples'],\n",
        "    schema=infer_schema.outputs['schema'],\n",
        "    stats_options=tfdv.StatsOptions(label_feature='is_recid'))\n",
        "exec_result = context.run(statistics_gen)\n",
        "\n",
        "for event in store.get_events_by_execution_ids([exec_result.execution_id]):\n",
        "  if event.path.steps[0].key == 'statistics':\n",
        "    statistics_w_schema_uri = store.get_artifacts_by_id([event.artifact_id])[0].uri\n",
        "\n",
        "model_stats = tfdv.load_statistics(\n",
        "    os.path.join(statistics_w_schema_uri, 'eval/stats_tfrecord/'))\n",
        "tfdv.visualize_statistics(model_stats)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ofWXz48zzlGT"
      },
      "source": [
        "## Tracking a Model Change\n",
        "\n",
        "Now that we have an idea of how we could improve the fairness of our model, we will first document our initial run within ML Metadata for our own record and for anyone else that might review our changes at a future time.\n",
        "\n",
        "ML Metadata can keep a log of our past models along with any notes that we would like to add between runs. 
We'll add a simple note on our first run denoting that this run was done on the full COMPAS dataset.\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": \"GCQ-7kzMRbXM\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"_MODEL_NOTE_TO_ADD = 'First model that contains fairness concerns in the model.'\\n\",\n        \"\\n\",\n        \"first_trained_model = store.get_artifacts_by_type('Model')[-1]\\n\",\n        \"\\n\",\n        \"# Add the note above to the ML Metadata.\\n\",\n        \"first_trained_model.custom_properties['note'].string_value = _MODEL_NOTE_TO_ADD\\n\",\n        \"store.put_artifacts([first_trained_model])\\n\",\n        \"\\n\",\n        \"def _mlmd_model_to_dataframe(model, model_number):\\n\",\n        \"  \\\"\\\"\\\"Helper function to turn an MLMD model into a Pandas DataFrame.\\n\",\n        \"\\n\",\n        \"  Args:\\n\",\n        \"    model: Metadata store model.\\n\",\n        \"    model_number: Number of model run within ML Metadata.\\n\",\n        \"\\n\",\n        \"  Returns:\\n\",\n        \"    DataFrame containing the ML Metadata model.\\n\",\n        \"  \\\"\\\"\\\"\\n\",\n        \"  pd.set_option('display.max_columns', None)  \\n\",\n        \"  pd.set_option('display.expand_frame_repr', False)\\n\",\n        \"\\n\",\n        \"  df = pd.DataFrame()\\n\",\n        \"  custom_properties = ['name', 'note', 'state', 'producer_component',\\n\",\n        \"                       'pipeline_name']\\n\",\n        \"  df['id'] = [model[model_number].id]\\n\",\n        \"  df['uri'] = [model[model_number].uri]\\n\",\n        \"  for prop in custom_properties:\\n\",\n        \"    df[prop] = model[model_number].custom_properties.get(prop)\\n\",\n        \"    df[prop] = df[prop].astype(str).map(\\n\",\n        \"        lambda x: x.lstrip('string_value: \\\"').rstrip('\\\"\\\\n'))\\n\",\n        \"  return df\\n\",\n        \"\\n\",\n   
     \"# Print the current model to see the results of the ML Metadata for the model.\\n\",\n        \"display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), 0))\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"-gwiNtcoeO8S\"\n      },\n      \"source\": [\n        \"## Improving fairness concerns by weighting the model\\n\",\n        \"\\n\",\n        \"\\n\",\n        \"There are several ways we can approach fixing fairness concerns within a model. Manipulating observed data/labels, implementing fairness constraints, or prejudice removal by regularization are some techniques\\u003csup\\u003e1\\u003c/sup\\u003e that have been used to fix fairness concerns. In this case study we will reweight the model by implementing a custom loss function into Keras.\\n\",\n        \"\\n\",\n        \"The code below is the same as the above Transform Component but with the exception of a new class called `LogisticEndpoint` that we will use for our loss within Keras and a few parameter changes.\\n\",\n        \"\\n\",\n        \"___\\n\",\n        \"\\n\",\n        \"1.  Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, N. (2019). A Survey on Bias and Fairness in Machine Learning. 
https://arxiv.org/pdf/1908.09635.pdf\\n\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": \"yzLWm3-1Zjvv\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"%%writefile {_trainer_module_file}\\n\",\n        \"import numpy as np\\n\",\n        \"import tensorflow as tf\\n\",\n        \"\\n\",\n        \"import tensorflow_model_analysis as tfma\\n\",\n        \"import tensorflow_transform as tft\\n\",\n        \"from tensorflow_transform.tf_metadata import schema_utils\\n\",\n        \"\\n\",\n        \"from compas_transform import *\\n\",\n        \"\\n\",\n        \"_BATCH_SIZE = 1000\\n\",\n        \"_LEARNING_RATE = 0.00001\\n\",\n        \"_MAX_CHECKPOINTS = 1\\n\",\n        \"_SAVE_CHECKPOINT_STEPS = 999\\n\",\n        \"\\n\",\n        \"\\n\",\n        \"def transformed_names(keys):\\n\",\n        \"  return [transformed_name(key) for key in keys]\\n\",\n        \"\\n\",\n        \"\\n\",\n        \"def transformed_name(key):\\n\",\n        \"  return '{}_xf'.format(key)\\n\",\n        \"\\n\",\n        \"\\n\",\n        \"def _gzip_reader_fn(filenames):\\n\",\n        \"  \\\"\\\"\\\"Returns a record reader that can read gzip'ed files.\\n\",\n        \"\\n\",\n        \"  Args:\\n\",\n        \"    filenames: A tf.string tensor or tf.data.Dataset containing one or more\\n\",\n        \"      filenames.\\n\",\n        \"\\n\",\n        \"  Returns:\\n\",\n        \"    A TFRecordDataset that reads the gzip'ed TFRecord files.\\n\",\n        \"  \\\"\\\"\\\"\\n\",\n        \"  return tf.data.TFRecordDataset(filenames, compression_type='GZIP')\\n\",\n        \"\\n\",\n        \"\\n\",\n        \"# Tf.Transform considers these features as \\\"raw\\\".\\n\",\n        \"def _get_raw_feature_spec(schema):\\n\",\n        \"  \\\"\\\"\\\"Generates a feature spec from a Schema 
proto.\\n\",\n        \"\\n\",\n        \"  Args:\\n\",\n        \"    schema: A Schema proto.\\n\",\n        \"\\n\",\n        \"  Returns:\\n\",\n        \"    A feature spec defined as a dict whose keys are feature names and values are\\n\",\n        \"      instances of FixedLenFeature, VarLenFeature or SparseFeature.\\n\",\n        \"  \\\"\\\"\\\"\\n\",\n        \"  return schema_utils.schema_as_feature_spec(schema).feature_spec\\n\",\n        \"\\n\",\n        \"\\n\",\n        \"def _example_serving_receiver_fn(tf_transform_output, schema):\\n\",\n        \"  \\\"\\\"\\\"Builds the serving in inputs.\\n\",\n        \"\\n\",\n        \"  Args:\\n\",\n        \"    tf_transform_output: A TFTransformOutput.\\n\",\n        \"    schema: the schema of the input data.\\n\",\n        \"\\n\",\n        \"  Returns:\\n\",\n        \"    TensorFlow graph which parses examples, applying tf-transform to them.\\n\",\n        \"  \\\"\\\"\\\"\\n\",\n        \"  raw_feature_spec = _get_raw_feature_spec(schema)\\n\",\n        \"  raw_feature_spec.pop(LABEL_KEY)\\n\",\n        \"\\n\",\n        \"  raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(\\n\",\n        \"      raw_feature_spec)\\n\",\n        \"  serving_input_receiver = raw_input_fn()\\n\",\n        \"\\n\",\n        \"  transformed_features = tf_transform_output.transform_raw_features(\\n\",\n        \"      serving_input_receiver.features)\\n\",\n        \"  transformed_features.pop(transformed_name(LABEL_KEY))\\n\",\n        \"  return tf.estimator.export.ServingInputReceiver(\\n\",\n        \"      transformed_features, serving_input_receiver.receiver_tensors)\\n\",\n        \"\\n\",\n        \"\\n\",\n        \"def _eval_input_receiver_fn(tf_transform_output, schema):\\n\",\n        \"  \\\"\\\"\\\"Builds everything needed for the tf-model-analysis to run the model.\\n\",\n        \"\\n\",\n        \"  Args:\\n\",\n        \"    tf_transform_output: A TFTransformOutput.\\n\",\n      
  \"    schema: the schema of the input data.\\n\",\n        \"\\n\",\n        \"  Returns:\\n\",\n        \"    EvalInputReceiver function, which contains:\\n\",\n        \"      - TensorFlow graph which parses raw untransformed features, applies the\\n\",\n        \"          tf-transform preprocessing operators.\\n\",\n        \"      - Set of raw, untransformed features.\\n\",\n        \"      - Label against which predictions will be compared.\\n\",\n        \"  \\\"\\\"\\\"\\n\",\n        \"  # Notice that the inputs are raw features, not transformed features here.\\n\",\n        \"  raw_feature_spec = _get_raw_feature_spec(schema)\\n\",\n        \"\\n\",\n        \"  serialized_tf_example = tf.compat.v1.placeholder(\\n\",\n        \"      dtype=tf.string, shape=[None], name='input_example_tensor')\\n\",\n        \"\\n\",\n        \"  # Add a parse_example operator to the tensorflow graph, which will parse\\n\",\n        \"  # raw, untransformed, tf examples.\\n\",\n        \"  features = tf.io.parse_example(\\n\",\n        \"      serialized=serialized_tf_example, features=raw_feature_spec)\\n\",\n        \"\\n\",\n        \"  transformed_features = tf_transform_output.transform_raw_features(features)\\n\",\n        \"  labels = transformed_features.pop(transformed_name(LABEL_KEY))\\n\",\n        \"\\n\",\n        \"  receiver_tensors = {'examples': serialized_tf_example}\\n\",\n        \"\\n\",\n        \"  return tfma.export.EvalInputReceiver(\\n\",\n        \"      features=transformed_features,\\n\",\n        \"      receiver_tensors=receiver_tensors,\\n\",\n        \"      labels=labels)\\n\",\n        \"\\n\",\n        \"\\n\",\n        \"def _input_fn(filenames, tf_transform_output, batch_size=200):\\n\",\n        \"  \\\"\\\"\\\"Generates features and labels for training or evaluation.\\n\",\n        \"\\n\",\n        \"  Args:\\n\",\n        \"    filenames: List of CSV files to read data from.\\n\",\n        \"    tf_transform_output: A 
TFTransformOutput.\\n\",\n        \"    batch_size: First dimension size of the Tensors returned by input_fn.\\n\",\n        \"\\n\",\n        \"  Returns:\\n\",\n        \"    A (features, indices) tuple where features is a dictionary of\\n\",\n        \"      Tensors, and indices is a single Tensor of label indices.\\n\",\n        \"  \\\"\\\"\\\"\\n\",\n        \"  transformed_feature_spec = (\\n\",\n        \"      tf_transform_output.transformed_feature_spec().copy())\\n\",\n        \"\\n\",\n        \"  dataset = tf.compat.v1.data.experimental.make_batched_features_dataset(\\n\",\n        \"      filenames,\\n\",\n        \"      batch_size,\\n\",\n        \"      transformed_feature_spec,\\n\",\n        \"      shuffle=False,\\n\",\n        \"      reader=_gzip_reader_fn)\\n\",\n        \"\\n\",\n        \"  transformed_features = dataset.make_one_shot_iterator().get_next()\\n\",\n        \"\\n\",\n        \"  # We pop the label because we do not want to use it as a feature while we're\\n\",\n        \"  # training.\\n\",\n        \"  return transformed_features, transformed_features.pop(\\n\",\n        \"      transformed_name(LABEL_KEY))\\n\",\n        \"\\n\",\n        \"\\n\",\n        \"# TFX will call this function.\\n\",\n        \"def trainer_fn(hparams, schema):\\n\",\n        \"  \\\"\\\"\\\"Build the estimator using the high level API.\\n\",\n        \"\\n\",\n        \"  Args:\\n\",\n        \"    hparams: Hyperparameters used to train the model as name/value pairs.\\n\",\n        \"    schema: Holds the schema of the training examples.\\n\",\n        \"\\n\",\n        \"  Returns:\\n\",\n        \"    A dict of the following:\\n\",\n        \"      - estimator: The estimator that will be used for training and eval.\\n\",\n        \"      - train_spec: Spec for training.\\n\",\n        \"      - eval_spec: Spec for eval.\\n\",\n        \"      - eval_input_receiver_fn: Input function for eval.\\n\",\n        \"  \\\"\\\"\\\"\\n\",\n        \"  
tf_transform_output = tft.TFTransformOutput(hparams.transform_output)\\n\",\n        \"\\n\",\n        \"  train_input_fn = lambda: _input_fn(\\n\",\n        \"      hparams.train_files,\\n\",\n        \"      tf_transform_output,\\n\",\n        \"      batch_size=_BATCH_SIZE)\\n\",\n        \"\\n\",\n        \"  eval_input_fn = lambda: _input_fn(\\n\",\n        \"      hparams.eval_files,\\n\",\n        \"      tf_transform_output,\\n\",\n        \"      batch_size=_BATCH_SIZE)\\n\",\n        \"\\n\",\n        \"  train_spec = tf.estimator.TrainSpec(\\n\",\n        \"      train_input_fn,\\n\",\n        \"      max_steps=hparams.train_steps)\\n\",\n        \"\\n\",\n        \"  serving_receiver_fn = lambda: _example_serving_receiver_fn(\\n\",\n        \"      tf_transform_output, schema)\\n\",\n        \"\\n\",\n        \"  exporter = tf.estimator.FinalExporter('compas', serving_receiver_fn)\\n\",\n        \"  eval_spec = tf.estimator.EvalSpec(\\n\",\n        \"      eval_input_fn,\\n\",\n        \"      steps=hparams.eval_steps,\\n\",\n        \"      exporters=[exporter],\\n\",\n        \"      name='compas-eval')\\n\",\n        \"\\n\",\n        \"  run_config = tf.estimator.RunConfig(\\n\",\n        \"      save_checkpoints_steps=_SAVE_CHECKPOINT_STEPS,\\n\",\n        \"      keep_checkpoint_max=_MAX_CHECKPOINTS)\\n\",\n        \"\\n\",\n        \"  run_config = run_config.replace(model_dir=hparams.serving_model_dir)\\n\",\n        \"\\n\",\n        \"  estimator = tf.keras.estimator.model_to_estimator(\\n\",\n        \"      keras_model=_keras_model_builder(), config=run_config)\\n\",\n        \"\\n\",\n        \"  # Create an input receiver for TFMA processing.\\n\",\n        \"  receiver_fn = lambda: _eval_input_receiver_fn(tf_transform_output, schema)\\n\",\n        \"\\n\",\n        \"  return {\\n\",\n        \"      'estimator': estimator,\\n\",\n        \"      'train_spec': train_spec,\\n\",\n        \"      'eval_spec': eval_spec,\\n\",\n        \"   
   'eval_input_receiver_fn': receiver_fn\\n\",\n        \"  }\\n\",\n        \"\\n\",\n        \"\\n\",\n        \"def _keras_model_builder():\\n\",\n        \"  \\\"\\\"\\\"Build a keras model for COMPAS dataset classification.\\n\",\n        \"  \\n\",\n        \"  Returns:\\n\",\n        \"    A compiled Keras model.\\n\",\n        \"  \\\"\\\"\\\"\\n\",\n        \"  feature_columns = []\\n\",\n        \"  feature_layer_inputs = {}\\n\",\n        \"\\n\",\n        \"  for key in transformed_names(INT_FEATURE_KEYS):\\n\",\n        \"    feature_columns.append(tf.feature_column.numeric_column(key))\\n\",\n        \"    feature_layer_inputs[key] = tf.keras.Input(shape=(1,), name=key)\\n\",\n        \"\\n\",\n        \"  for key, num_buckets in zip(transformed_names(CATEGORICAL_FEATURE_KEYS),\\n\",\n        \"                              MAX_CATEGORICAL_FEATURE_VALUES):\\n\",\n        \"    feature_columns.append(\\n\",\n        \"        tf.feature_column.indicator_column(\\n\",\n        \"            tf.feature_column.categorical_column_with_identity(\\n\",\n        \"                key, num_buckets=num_buckets)))\\n\",\n        \"    feature_layer_inputs[key] = tf.keras.Input(\\n\",\n        \"        shape=(1,), name=key, dtype=tf.dtypes.int32)\\n\",\n        \"\\n\",\n        \"  feature_columns_input = tf.keras.layers.DenseFeatures(feature_columns)\\n\",\n        \"  feature_layer_outputs = feature_columns_input(feature_layer_inputs)\\n\",\n        \"\\n\",\n        \"  dense_layers = tf.keras.layers.Dense(\\n\",\n        \"      20, activation='relu', name='dense_1')(feature_layer_outputs)\\n\",\n        \"  dense_layers = tf.keras.layers.Dense(\\n\",\n        \"      10, activation='relu', name='dense_2')(dense_layers)\\n\",\n        \"  output = tf.keras.layers.Dense(\\n\",\n        \"      1, name='predictions')(dense_layers)\\n\",\n        \"\\n\",\n        \"  model = tf.keras.Model(\\n\",\n        \"      inputs=[v for v in 
feature_layer_inputs.values()], outputs=output)\\n\",\n        \"\\n\",\n        \"  # To weight our model, we develop a custom loss class within Keras.\\n\",\n        \"  # The old loss is commented out and the new one is added below.\\n\",\n        \"  model.compile(\\n\",\n        \"      # loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\\n\",\n        \"      loss=LogisticEndpoint(),\\n\",\n        \"      optimizer=tf.optimizers.Adam(learning_rate=_LEARNING_RATE))\\n\",\n        \"\\n\",\n        \"  return model\\n\",\n        \"\\n\",\n        \"\\n\",\n        \"class LogisticEndpoint(tf.keras.layers.Layer):\\n\",\n        \"\\n\",\n        \"  def __init__(self, name=None):\\n\",\n        \"    super(LogisticEndpoint, self).__init__(name=name)\\n\",\n        \"    self.loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True)\\n\",\n        \"\\n\",\n        \"  def __call__(self, y_true, y_pred, sample_weight=None):\\n\",\n        \"    inputs = [y_true, y_pred]\\n\",\n        \"    inputs += sample_weight or ['sample_weight_xf']\\n\",\n        \"    return super(LogisticEndpoint, self).__call__(inputs)\\n\",\n        \"\\n\",\n        \"  def call(self, inputs):\\n\",\n        \"    y_true, y_pred = inputs[0], inputs[1]\\n\",\n        \"    if len(inputs) == 3:\\n\",\n        \"      sample_weight = inputs[2]\\n\",\n        \"    else:\\n\",\n        \"      sample_weight = None\\n\",\n        \"    loss = self.loss_fn(y_true, y_pred, sample_weight)\\n\",\n        \"    self.add_loss(loss)\\n\",\n        \"    reduce_loss = tf.math.divide_no_nan(\\n\",\n        \"        tf.math.reduce_sum(tf.nn.softmax(y_pred)), _BATCH_SIZE)\\n\",\n        \"    return reduce_loss\\n\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"thSmshFN94pt\"\n      },\n      \"source\": [\n        \"## Retrain the TFX model with the weighted model\\n\",\n        \"\\n\",\n        \"In this next part 
we will use the weighted Transform Component to rerun the same Trainer model as before to see the improvement in fairness after the weighting is applied.\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": \"Bb0Rl9UOFgoM\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"trainer_weighted = Trainer(\\n\",\n        \"    module_file=_trainer_module_file,\\n\",\n        \"    transformed_examples=transform.outputs['transformed_examples'],\\n\",\n        \"    schema=infer_schema.outputs['schema'],\\n\",\n        \"    transform_graph=transform.outputs['transform_graph'],\\n\",\n        \"    train_args=trainer_pb2.TrainArgs(num_steps=10000),\\n\",\n        \"    eval_args=trainer_pb2.EvalArgs(num_steps=5000)\\n\",\n        \")\\n\",\n        \"context.run(trainer_weighted)\\n\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": \"n7xH61MCPwUO\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"# Again, we will run TensorFlow Model Analysis and load Fairness Indicators\\n\",\n        \"# to examine the performance change in our weighted model.\\n\",\n        \"model_analyzer_weighted = Evaluator(\\n\",\n        \"    examples=example_gen.outputs['examples'],\\n\",\n        \"    model=trainer_weighted.outputs['model'],\\n\",\n        \"\\n\",\n        \"    eval_config = text_format.Parse(\\\"\\\"\\\"\\n\",\n        \"      model_specs {\\n\",\n        \"        label_key: 'is_recid'\\n\",\n        \"      }\\n\",\n        \"      metrics_specs {\\n\",\n        \"        metrics {class_name: 'BinaryAccuracy'}\\n\",\n        \"        metrics {class_name: 'AUC'}\\n\",\n        \"        metrics {\\n\",\n        \"          class_name: 'FairnessIndicators'\\n\",\n        \"          config: '{\\\"thresholds\\\": [0.25, 0.5, 0.75]}'\\n\",\n        \"        }\\n\",\n        \"      
}\\n\",\n        \"      slicing_specs {\\n\",\n        \"        feature_keys: 'race'\\n\",\n        \"      }\\n\",\n        \"    \\\"\\\"\\\", tfma.EvalConfig())\\n\",\n        \")\\n\",\n        \"context.run(model_analyzer_weighted)\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": \"206gQS1r-1FX\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"evaluation_uri_weighted = model_analyzer_weighted.outputs['evaluation'].get()[0].uri\\n\",\n        \"eval_result_weighted = tfma.load_eval_result(evaluation_uri_weighted)\\n\",\n        \"\\n\",\n        \"multi_eval_results = {\\n\",\n        \"    'Unweighted Model': eval_result,\\n\",\n        \"    'Weighted Model': eval_result_weighted\\n\",\n        \"}\\n\",\n        \"tfma.addons.fairness.view.widget_view.render_fairness_indicator(\\n\",\n        \"    multi_eval_results=multi_eval_results)\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"bwoz69Wzvt8q\"\n      },\n      \"source\": [\n        \"After retraining our results with the weighted model, we can once again look at the fairness metrics to gauge any improvements in the model. This time, however, we will use the model comparison feature within Fairness Indicators to see the difference between the weighted and unweighted model. 
Although we’re still seeing some fairness concerns with the weighted model, the discrepancy is far less pronounced.\\n\",\n        \"\\n\",\n        \"The drawback, however, is that our AUC and binary accuracy have also dropped after weighting the model.\\n\",\n        \"\\n\",\n        \"\\n\",\n        \"* **False Positive Rate @ 0.75**\\n\",\n        \"  * **African-American:** ~1%\\n\",\n        \"     * AUC: 0.47\\n\",\n        \"     * Binary Accuracy: 0.59\\n\",\n        \"  * **Caucasian:** ~0%\\n\",\n        \"     * AUC: 0.47\\n\",\n        \"     * Binary Accuracy: 0.58\\n\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"oEhq3ne7gazf\"\n      },\n      \"source\": [\n        \"## Examine the data of the second run\\n\",\n        \"\\n\",\n        \"Finally, we can visualize the data with TensorFlow Data Validation, overlay the data changes between the two models, and add an additional note to the ML Metadata indicating that this model has improved the fairness concerns.\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": \"WM-uqqfOggcw\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"# Pull the URI for the two models that we ran in this case study.\\n\",\n        \"first_model_uri = store.get_artifacts_by_type('ExampleStatistics')[-1].uri\\n\",\n        \"second_model_uri = store.get_artifacts_by_type('ExampleStatistics')[0].uri\\n\",\n        \"\\n\",\n        \"# Load the stats for both models.\\n\",\n        \"first_model_uri = tfdv.load_statistics(os.path.join(\\n\",\n        \"    first_model_uri, 'eval/stats_tfrecord/'))\\n\",\n        \"second_model_stats = tfdv.load_statistics(os.path.join(\\n\",\n        \"    second_model_uri, 'eval/stats_tfrecord/'))\\n\",\n        \"\\n\",\n        \"# Visualize the statistics between the two models.\\n\",\n        \"tfdv.visualize_statistics(\\n\",\n       
 \"    lhs_statistics=second_model_stats,\\n\",\n        \"    lhs_name='Sampled Model',\\n\",\n        \"    rhs_statistics=first_model_uri,\\n\",\n        \"    rhs_name='COMPAS Orginal')\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": \"YOMbqITkhNkO\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"# Add a new note within ML Metadata describing the weighted model.\\n\",\n        \"_NOTE_TO_ADD = 'Weighted model between race and is_recid.'\\n\",\n        \"\\n\",\n        \"# Pulling the URI for the weighted trained model.\\n\",\n        \"second_trained_model = store.get_artifacts_by_type('Model')[-1]\\n\",\n        \"\\n\",\n        \"# Add the note to ML Metadata.\\n\",\n        \"second_trained_model.custom_properties['note'].string_value = _NOTE_TO_ADD\\n\",\n        \"store.put_artifacts([second_trained_model])\\n\",\n        \"\\n\",\n        \"display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), -1))\\n\",\n        \"display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), 0))\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"f0fGWt-OIzEb\"\n      },\n      \"source\": [\n        \"## Conclusion\\n\",\n        \"\\n\",\n        \"Within this case study we developed a Keras classifier within a TFX pipeline with the COMPAS dataset to examine any fairness concerns within the dataset. After initially developing the TFX, fairness concerns were not immediately apparent until examining the individual slices within our model by our sensitive features --in our case race. After identifying the issues, we were able to track down the source of the fairness issue with TensorFlow DataValidation to identify a method to mitigate the fairness concerns via model weighting while tracking and annotating the changes via ML Metadata. 
Although we are not able to fully fix all the fairness concerns within the dataset, adding a note for future developers to follow allows others to understand the issues we faced while developing this model. \\n\",\n        \"\\n\",\n        \"Finally, it is important to note that this case study did not fix the fairness issues that are present in the COMPAS dataset. By improving the fairness concerns in the model, we also reduced the AUC and accuracy of the model. What we were able to do, however, was build a model that showcased the fairness concerns and track down where the problems could be coming from by tracking our model's lineage while annotating any model concerns within the metadata.\\n\",\n        \"\\n\",\n        \"For more information on the issues that predicting pre-trial detention can have, see the FAT* 2018 talk on [\\\"Understanding the Context and Consequences of Pre-trial Detention\\\"](https://www.youtube.com/watch?v=hEThGT-_5ho\\u0026feature=youtu.be\\u0026t=1)\"\n      ]\n    }\n  ],\n  \"metadata\": {\n    \"colab\": {\n      \"collapsed_sections\": [],\n      \"name\": \"Fairness Indicators Lineage Case Study\",\n      \"provenance\": [],\n      \"toc_visible\": true\n    },\n    \"kernelspec\": {\n      \"display_name\": \"Python 3\",\n      \"name\": \"python3\"\n    }\n  },\n  \"nbformat\": 4,\n  \"nbformat_minor\": 0\n}\n"
  },
  {
    "path": "docs/tutorials/_toc.yaml",
    "content": "toc:\n- title: Introduction to Fairness Indicators\n  path: /responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_Example_Colab\n- title: Evaluate fairness using TF-Hub models\n  path: /responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_on_TF_Hub_Text_Embeddings\n- title: Visualize with TensorBoard Plugin\n  path: /responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_TensorBoard_Plugin_Example_Colab\n- title: Evaluate toxicity in Wiki comments\n  path: /responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_TFCO_Wiki_Case_Study\n- title: TensorFlow constrained optimization example\n  path: /responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study\n- title: Pandas DataFrame case study\n  path: /responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_Pandas_Case_Study\n- title: FaceSSD example Colab\n  path: /responsible_ai/fairness_indicators/tutorials/Facessd_Fairness_Indicators_Example_Colab\n"
  },
  {
    "path": "fairness_indicators/__init__.py",
    "content": "# Copyright 2019 Google LLC. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Init module for Fairness Indicators.\"\"\"\n\n# Import version string.\nfrom fairness_indicators.version import __version__\n"
  },
  {
    "path": "fairness_indicators/example_model.py",
    "content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Demo script to train and evaluate a model.\n\nThis script contains boilerplate code to train a Keras Text Classifier\nand evaluate it using TensorFlow Model Analysis. Evaluation\nresults can be visualized using tools like TensorBoard.\"\"\"\n\nfrom typing import Any\n\nimport tensorflow.compat.v1 as tf\nimport tensorflow_model_analysis as tfma\nfrom tensorflow import keras\n\nfrom fairness_indicators import fairness_indicators_metrics  # noqa: F401\n\nTEXT_FEATURE = \"comment_text\"\nLABEL = \"toxicity\"\nSLICE = \"slice\"\nFEATURE_MAP = {\n    LABEL: tf.io.FixedLenFeature([], tf.float32),\n    TEXT_FEATURE: tf.io.FixedLenFeature([], tf.string),\n    SLICE: tf.io.VarLenFeature(tf.string),\n}\n\n\nclass ExampleParser(keras.layers.Layer):\n    \"\"\"A Keras layer that parses the tf.Example.\"\"\"\n\n    def __init__(self, input_feature_key):\n        self._input_feature_key = input_feature_key\n        self.input_spec = keras.layers.InputSpec(shape=(1,), dtype=tf.string)\n        super().__init__()\n\n    def compute_output_shape(self, input_shape: Any):\n        return [1, 1]\n\n    def call(self, serialized_examples):\n        def get_feature(serialized_example):\n            parsed_example = tf.io.parse_single_example(\n                
serialized_example, features=FEATURE_MAP\n            )\n            return parsed_example[self._input_feature_key]\n\n        serialized_examples = tf.cast(serialized_examples, tf.string)\n        return tf.map_fn(get_feature, serialized_examples)\n\n\nclass Reshaper(keras.layers.Layer):\n    \"\"\"A Keras layer that reshapes the input.\"\"\"\n\n    def call(self, inputs):\n        return tf.reshape(inputs, (1, 32))\n\n\nclass Caster(keras.layers.Layer):\n    \"\"\"A Keras layer that casts the input to float32.\"\"\"\n\n    def call(self, inputs):\n        return tf.cast(inputs, tf.float32)\n\n\ndef get_example_model(input_feature_key: str):\n    \"\"\"Returns a Keras model for testing.\"\"\"\n    parser = ExampleParser(input_feature_key)\n    text_vectorization = keras.layers.TextVectorization(\n        max_tokens=32,\n        output_mode=\"int\",\n        output_sequence_length=32,\n    )\n    text_vectorization.adapt(\n        [\"nontoxic\", \"toxic comment\", \"test comment\", \"abc\", \"abcdef\", \"random\"]\n    )\n    dense1 = keras.layers.Dense(\n        32,\n        activation=None,\n        use_bias=True,\n        kernel_initializer=\"glorot_uniform\",\n        bias_initializer=\"zeros\",\n    )\n    dense2 = keras.layers.Dense(\n        1,\n        activation=None,\n        use_bias=False,\n        kernel_initializer=\"glorot_uniform\",\n        bias_initializer=\"zeros\",\n    )\n\n    inputs = tf.keras.Input(shape=(), dtype=tf.string)\n    parsed_example = parser(inputs)\n    text_vector = text_vectorization(parsed_example)\n    text_vector = Reshaper()(text_vector)\n    text_vector = Caster()(text_vector)\n    output1 = dense1(text_vector)\n    output2 = dense2(output1)\n    return tf.keras.Model(inputs=inputs, outputs=output2)\n\n\ndef evaluate_model(\n    classifier_model_path,\n    validate_tf_file_path,\n    tfma_eval_result_path,\n    eval_config,\n):\n    \"\"\"Evaluate Model using TensorFlow Model Analysis.\n\n    Args:\n    ----\n      
classifier_model_path: Trained classifier model to be evaluated.\n      validate_tf_file_path: File containing validation TFRecordDataset.\n      tfma_eval_result_path: Path to export the tfma eval result to.\n      eval_config: tfma eval_config.\n    \"\"\"\n    eval_shared_model = tfma.default_eval_shared_model(\n        eval_saved_model_path=classifier_model_path, eval_config=eval_config\n    )\n\n    # Run the fairness evaluation.\n    tfma.run_model_analysis(\n        eval_shared_model=eval_shared_model,\n        data_location=validate_tf_file_path,\n        output_path=tfma_eval_result_path,\n        eval_config=eval_config,\n    )\n"
  },
  {
    "path": "fairness_indicators/example_model_test.py",
    "content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Tests for example_model.py.\n\nIt also serves as an example of how to use fairness indicators with a Keras\nmodel.\n\"\"\"\n\nimport datetime\nimport os\nimport tempfile\n\nimport numpy as np\nimport six\nimport tensorflow.compat.v1 as tf\nimport tensorflow_model_analysis as tfma\nfrom google.protobuf import text_format\nfrom tensorflow import keras\n\nfrom fairness_indicators import example_model\n\ntf.compat.v1.enable_eager_execution()\n\n\nclass ExampleModelTest(tf.test.TestCase):\n    def setUp(self):\n        super(ExampleModelTest, self).setUp()\n        self._base_dir = tempfile.gettempdir()\n\n        self._model_dir = os.path.join(\n            self._base_dir, \"train\", datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\")\n        )\n\n    def _create_example(self, comment_text, label, slice_value):\n        example = tf.train.Example()\n        example.features.feature[example_model.TEXT_FEATURE].bytes_list.value[:] = [\n            six.ensure_binary(comment_text, \"utf8\")\n        ]\n        example.features.feature[example_model.SLICE].bytes_list.value[:] = [\n            six.ensure_binary(slice_value, \"utf8\")\n        ]\n        example.features.feature[example_model.LABEL].float_list.value[:] = [label]\n        return example\n\n    
def _create_data(self):\n        examples = []\n        examples.append(self._create_example(\"test comment\", 0.0, \"slice1\"))\n        examples.append(self._create_example(\"toxic comment\", 1.0, \"slice1\"))\n        examples.append(self._create_example(\"non-toxic comment\", 0.0, \"slice1\"))\n        examples.append(self._create_example(\"test comment\", 1.0, \"slice2\"))\n        examples.append(self._create_example(\"non-toxic comment\", 0.0, \"slice2\"))\n        examples.append(self._create_example(\"test comment\", 0.0, \"slice3\"))\n        examples.append(self._create_example(\"toxic comment\", 1.0, \"slice3\"))\n        examples.append(self._create_example(\"toxic comment\", 1.0, \"slice3\"))\n        examples.append(self._create_example(\"non toxic comment\", 0.0, \"slice3\"))\n        examples.append(self._create_example(\"abc\", 0.0, \"slice1\"))\n        examples.append(self._create_example(\"abcdef\", 0.0, \"slice3\"))\n        examples.append(self._create_example(\"random\", 0.0, \"slice1\"))\n        return examples\n\n    def _write_tf_records(self, examples):\n        data_location = os.path.join(self._base_dir, \"input_data.rio\")\n        with tf.io.TFRecordWriter(data_location) as writer:\n            for example in examples:\n                writer.write(example.SerializeToString())\n        return data_location\n\n    def test_example_model(self):\n        data = self._create_data()\n        classifier = example_model.get_example_model(example_model.TEXT_FEATURE)\n        classifier.compile(optimizer=keras.optimizers.Adam(), loss=\"mse\")\n        classifier.fit(\n            tf.constant([e.SerializeToString() for e in data]),\n            np.array(\n                [\n                    e.features.feature[example_model.LABEL].float_list.value[:][0]\n                    for e in data\n                ]\n            ),\n            batch_size=1,\n        )\n        tf.saved_model.save(classifier, self._model_dir)\n\n        eval_config = 
text_format.Parse(\n            \"\"\"\n        model_specs {\n          signature_name: \"serving_default\"\n          prediction_key: \"predictions\" # placeholder\n          label_key: \"toxicity\" # placeholder\n        }\n        slicing_specs {}\n        slicing_specs {\n          feature_keys: [\"slice\"]\n        }\n        metrics_specs {\n          metrics {\n            class_name: \"ExampleCount\"\n          }\n          metrics {\n            class_name: \"FairnessIndicators\"\n          }\n        }\n  \"\"\",\n            tfma.EvalConfig(),\n        )\n\n        validate_tf_file_path = self._write_tf_records(data)\n        tfma_eval_result_path = os.path.join(self._model_dir, \"tfma_eval_result\")\n        example_model.evaluate_model(\n            self._model_dir,\n            validate_tf_file_path,\n            tfma_eval_result_path,\n            eval_config,\n        )\n\n        evaluation_results = tfma.load_eval_result(tfma_eval_result_path)\n\n        expected_slice_keys = [\n            (),\n            ((\"slice\", \"slice1\"),),\n            ((\"slice\", \"slice2\"),),\n            ((\"slice\", \"slice3\"),),\n        ]\n        slice_keys = [slice_key for slice_key, _ in evaluation_results.slicing_metrics]\n        self.assertEqual(set(expected_slice_keys), set(slice_keys))\n        # Verify part of the metrics of fairness indicators\n        metric_values = dict(evaluation_results.slicing_metrics)[\n            ((\"slice\", \"slice1\"),)\n        ][\"\"][\"\"]\n        self.assertEqual(metric_values[\"example_count\"], {\"doubleValue\": 5.0})\n\n        self.assertEqual(\n            metric_values[\"fairness_indicators_metrics/false_positive_rate@0.1\"],\n            {\"doubleValue\": 0.0},\n        )\n        self.assertEqual(\n            metric_values[\"fairness_indicators_metrics/false_negative_rate@0.1\"],\n            {\"doubleValue\": 1.0},\n        )\n        self.assertEqual(\n            
metric_values[\"fairness_indicators_metrics/true_positive_rate@0.1\"],\n            {\"doubleValue\": 0.0},\n        )\n        self.assertEqual(\n            metric_values[\"fairness_indicators_metrics/true_negative_rate@0.1\"],\n            {\"doubleValue\": 1.0},\n        )\n"
  },
  {
    "path": "fairness_indicators/fairness_indicators_metrics.py",
    "content": "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Fairness Indicators Metrics.\"\"\"\n\nimport collections\nfrom typing import Any, Dict, List, Optional, Sequence\n\nfrom tensorflow_model_analysis.metrics import (\n    binary_confusion_matrices,\n    metric_types,\n    metric_util,\n)\nfrom tensorflow_model_analysis.proto import config_pb2\n\nFAIRNESS_INDICATORS_METRICS_NAME = \"fairness_indicators_metrics\"\nFAIRNESS_INDICATORS_SUB_METRICS = (\n    \"false_positive_rate\",\n    \"false_negative_rate\",\n    \"true_positive_rate\",\n    \"true_negative_rate\",\n    \"positive_rate\",\n    \"negative_rate\",\n    \"false_discovery_rate\",\n    \"false_omission_rate\",\n    \"precision\",\n    \"recall\",\n)\n\nDEFAULT_THRESHOLDS = (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9)\n\n\nclass FairnessIndicators(metric_types.Metric):\n    \"\"\"Fairness indicators metrics.\"\"\"\n\n    def computations_with_logging(self):\n        \"\"\"Add streamz logging for fairness indicators.\"\"\"\n        computations_fn = metric_util.merge_per_key_computations(\n            _fairness_indicators_metrics_at_thresholds\n        )\n\n        def merge_and_log_computations_fn(\n            eval_config: Optional[config_pb2.EvalConfig] = None,\n            # A tf metadata schema.\n            schema: Optional[Any] = None,\n            model_names: Optional[List[str]] = None,\n            output_names: Optional[List[str]] = None,\n      
      sub_keys: Optional[List[Optional[metric_types.SubKey]]] = None,\n            aggregation_type: Optional[metric_types.AggregationType] = None,\n            class_weights: Optional[Dict[int, float]] = None,\n            example_weighted: bool = False,\n            query_key: Optional[str] = None,\n            **kwargs,\n        ):\n            return computations_fn(\n                eval_config,\n                schema,\n                model_names,\n                output_names,\n                sub_keys,\n                aggregation_type,\n                class_weights,\n                example_weighted,\n                query_key,\n                **kwargs,\n            )\n\n        return merge_and_log_computations_fn\n\n    def __init__(\n        self,\n        thresholds: Sequence[float] = DEFAULT_THRESHOLDS,\n        name: str = FAIRNESS_INDICATORS_METRICS_NAME,\n    ):\n        \"\"\"Initializes fairness indicators metrics.\n\n        Args:\n        ----\n          thresholds: Thresholds to use for fairness metrics.\n          name: Metric name.\n        \"\"\"\n        super().__init__(\n            self.computations_with_logging(), thresholds=thresholds, name=name\n        )\n\n\ndef calculate_digits(thresholds):\n    digits = [len(str(t)) - 2 for t in thresholds]\n    return max(max(digits), 1)\n\n\ndef _fairness_indicators_metrics_at_thresholds(\n    thresholds: List[float],\n    name: str = FAIRNESS_INDICATORS_METRICS_NAME,\n    eval_config: Optional[config_pb2.EvalConfig] = None,\n    model_name: str = \"\",\n    output_name: str = \"\",\n    aggregation_type: Optional[metric_types.AggregationType] = None,\n    sub_key: Optional[metric_types.SubKey] = None,\n    class_weights: Optional[Dict[int, float]] = None,\n    example_weighted: bool = False,\n) -> metric_types.MetricComputations:\n    \"\"\"Returns computations for fairness metrics at thresholds.\"\"\"\n    metric_key_by_name_by_threshold = collections.defaultdict(dict)\n    keys = []\n    
digits_num = calculate_digits(thresholds)\n    for t in thresholds:\n        for m in FAIRNESS_INDICATORS_SUB_METRICS:\n            key = metric_types.MetricKey(\n                name=\"%s/%s@%.*f\"\n                % (\n                    name,\n                    m,\n                    digits_num,\n                    t,\n                ),  # e.g. \"fairness_indicators_metrics/positive_rate@0.5\"\n                model_name=model_name,\n                output_name=output_name,\n                sub_key=sub_key,\n                example_weighted=example_weighted,\n            )\n            keys.append(key)\n            metric_key_by_name_by_threshold[t][m] = key\n\n    # Make sure matrices are calculated.\n    computations = binary_confusion_matrices.binary_confusion_matrices(\n        eval_config=eval_config,\n        model_name=model_name,\n        output_name=output_name,\n        sub_key=sub_key,\n        aggregation_type=aggregation_type,\n        class_weights=class_weights,\n        example_weighted=example_weighted,\n        thresholds=thresholds,\n    )\n    confusion_matrices_key = computations[-1].keys[-1]\n\n    def result(\n        metrics: Dict[metric_types.MetricKey, Any],\n    ) -> Dict[metric_types.MetricKey, Any]:\n        \"\"\"Returns fairness metrics values.\"\"\"\n        metric = metrics[confusion_matrices_key]\n        output = {}\n\n        for i, threshold in enumerate(thresholds):\n            num_positives = metric.tp[i] + metric.fn[i]\n            num_negatives = metric.tn[i] + metric.fp[i]\n\n            tpr = metric.tp[i] / (num_positives or float(\"nan\"))\n            tnr = metric.tn[i] / (num_negatives or float(\"nan\"))\n            fpr = metric.fp[i] / (num_negatives or float(\"nan\"))\n            fnr = metric.fn[i] / (num_positives or float(\"nan\"))\n            pr = (metric.tp[i] + metric.fp[i]) / (\n                (num_positives + num_negatives) or float(\"nan\")\n            )\n            nr = (metric.tn[i] + 
metric.fn[i]) / (\n                (num_positives + num_negatives) or float(\"nan\")\n            )\n            precision = metric.tp[i] / ((metric.tp[i] + metric.fp[i]) or float(\"nan\"))\n            recall = metric.tp[i] / ((metric.tp[i] + metric.fn[i]) or float(\"nan\"))\n\n            fdr = metric.fp[i] / ((metric.fp[i] + metric.tp[i]) or float(\"nan\"))\n            fomr = metric.fn[i] / ((metric.fn[i] + metric.tn[i]) or float(\"nan\"))\n\n            output[\n                metric_key_by_name_by_threshold[threshold][\"false_positive_rate\"]\n            ] = fpr\n            output[\n                metric_key_by_name_by_threshold[threshold][\"false_negative_rate\"]\n            ] = fnr\n            output[metric_key_by_name_by_threshold[threshold][\"true_positive_rate\"]] = (\n                tpr\n            )\n            output[metric_key_by_name_by_threshold[threshold][\"true_negative_rate\"]] = (\n                tnr\n            )\n            output[metric_key_by_name_by_threshold[threshold][\"positive_rate\"]] = pr\n            output[metric_key_by_name_by_threshold[threshold][\"negative_rate\"]] = nr\n            output[\n                metric_key_by_name_by_threshold[threshold][\"false_discovery_rate\"]\n            ] = fdr\n            output[\n                metric_key_by_name_by_threshold[threshold][\"false_omission_rate\"]\n            ] = fomr\n            output[metric_key_by_name_by_threshold[threshold][\"precision\"]] = precision\n            output[metric_key_by_name_by_threshold[threshold][\"recall\"]] = recall\n\n        return output\n\n    derived_computation = metric_types.DerivedMetricComputation(\n        keys=keys, result=result\n    )\n\n    computations.append(derived_computation)\n    return computations\n\n\nmetric_types.register_metric(FairnessIndicators)\n"
  },
  {
    "path": "fairness_indicators/remediation/__init__.py",
    "content": "# Copyright 2019 Google LLC. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n"
  },
  {
    "path": "fairness_indicators/remediation/weight_utils.py",
    "content": "\"\"\"Utilities to suggest weights based on model analysis results.\"\"\"\n\nfrom typing import Any, Dict, Mapping\n\nimport tensorflow_model_analysis as tfma\n\n\ndef create_percentage_difference_dictionary(\n    eval_result: tfma.EvalResult, baseline_name: str, metric_name: str\n) -> Dict[str, Any]:\n    \"\"\"Creates dictionary of a % difference between a baseline and other slices.\n\n    Args:\n    ----\n      eval_result: Loaded eval result from running TensorFlow Model Analysis.\n      baseline_name: Name of the baseline slice, 'Overall' or a specified tuple.\n      metric_name: Name of the metric on which to perform comparisons.\n\n    Returns:\n    -------\n      Dictionary mapping slices to percentage difference from the baseline slice.\n    \"\"\"\n    baseline_value = get_baseline_value(eval_result, baseline_name, metric_name)\n    difference = {}\n    for metrics_tuple in eval_result.slicing_metrics:\n        slice_key = metrics_tuple[0]\n        metrics = metrics_tuple[1]\n        # Concatenate feature name/values for intersectional features.\n        column = \"-\".join([elem[0] for elem in slice_key])\n        feature_val = \"-\".join([elem[1] for elem in slice_key])\n        if column not in difference:\n            difference[column] = {}\n        difference[column][feature_val] = (\n            _get_metric_value(metrics, metric_name) - baseline_value\n        ) / baseline_value\n    return difference\n\n\ndef _get_metric_value(\n    nested_dict: Mapping[str, Mapping[str, Any]], metric_name: str\n) -> float:\n    \"\"\"Returns the value of the named metric from a slice's metrics.\n\n    Args:\n    ----\n      nested_dict: Dictionary of metrics from slice.\n      metric_name: Value to return from the metric slice.\n\n    Returns:\n    -------\n      Percentage value of the baseline slice name requested.\n\n    Raises:\n    ------\n      KeyError: If the metric name isn't found in the metrics dictionary or if the\n        input 
metrics dictionary is empty.\n      TypeError: If an unsupported value type is found within dictionary slice.\n        passed.\n    \"\"\"\n    for value in nested_dict.values():\n        if metric_name in value[\"\"]:\n            typed_value = value[\"\"][metric_name]\n            if \"doubleValue\" in typed_value:\n                return typed_value[\"doubleValue\"]\n            if \"boundedValue\" in typed_value:\n                return typed_value[\"boundedValue\"][\"value\"]\n            raise TypeError(\"Unsupported value type: %s\" % typed_value)\n        else:\n            raise KeyError(\n                \"Key %s not found in %s\" % (metric_name, list(value[\"\"].keys()))\n            )\n    raise KeyError(\n        \"Unable to return a metric value because the dictionary passed is empty.\"\n    )\n\n\ndef get_baseline_value(\n    eval_result: tfma.EvalResult, baseline_name: str, metric_name: str\n) -> float:\n    \"\"\"Looks through the evaluation result for the value of the baseline slice.\n\n    Args:\n    ----\n      eval_result: Loaded eval result from running TensorFlow Model Analysis.\n      baseline_name: Name of the baseline slice, 'Overall' or a specified tuple.\n      metric_name: Name of the metric on which to perform comparisons.\n\n    Returns:\n    -------\n      Percentage value of the baseline slice name requested.\n\n    Raises:\n    ------\n      Value error if the baseline slice is not found in eval_results.\n    \"\"\"\n    for metrics_tuple in eval_result.slicing_metrics:\n        slice_tuple = metrics_tuple[0]\n        if baseline_name == \"Overall\" and not slice_tuple:\n            return _get_metric_value(metrics_tuple[1], metric_name)\n        if baseline_name == slice_tuple:\n            return _get_metric_value(metrics_tuple[1], metric_name)\n    raise ValueError(\n        \"Could not find baseline %s in eval_result: %s\" % (baseline_name, eval_result)\n    )\n"
  },
  {
    "path": "fairness_indicators/remediation/weight_utils_test.py",
    "content": "\"\"\"Tests for fairness_indicators.remediation.weight_utils.\"\"\"\n\nimport collections\n\nimport tensorflow.compat.v1 as tf\n\nfrom fairness_indicators.remediation import weight_utils\n\nEvalResult = collections.namedtuple(\"EvalResult\", [\"slicing_metrics\"])\n\n\nclass WeightUtilsTest(tf.test.TestCase):\n    def create_eval_result(self):\n        return EvalResult(\n            slicing_metrics=[\n                (\n                    (),\n                    {\n                        \"\": {\n                            \"\": {\n                                \"post_export_metrics/negative_rate@0.10\": {\n                                    \"doubleValue\": 0.08\n                                },\n                                \"accuracy\": {\"doubleValue\": 0.444},\n                            }\n                        }\n                    },\n                ),\n                (\n                    ((\"gender\", \"female\"),),\n                    {\n                        \"\": {\n                            \"\": {\n                                \"post_export_metrics/negative_rate@0.10\": {\n                                    \"doubleValue\": 0.09\n                                },\n                                \"accuracy\": {\"doubleValue\": 0.333},\n                            }\n                        }\n                    },\n                ),\n                (\n                    (\n                        (\"gender\", \"female\"),\n                        (\"sexual_orientation\", \"homosexual_gay_or_lesbian\"),\n                    ),\n                    {\n                        \"\": {\n                            \"\": {\n                                \"post_export_metrics/negative_rate@0.10\": {\n                                    \"doubleValue\": 0.1\n                                },\n                                \"accuracy\": {\"doubleValue\": 0.222},\n                            }\n            
            }\n                    },\n                ),\n            ]\n        )\n\n    def create_bounded_result(self):\n        return EvalResult(\n            slicing_metrics=[\n                (\n                    (),\n                    {\n                        \"\": {\n                            \"\": {\n                                \"post_export_metrics/negative_rate@0.10\": {\n                                    \"boundedValue\": {\n                                        \"lowerBound\": 0.07,\n                                        \"upperBound\": 0.09,\n                                        \"value\": 0.08,\n                                        \"methodology\": \"POISSON_BOOTSTRAP\",\n                                    }\n                                },\n                                \"accuracy\": {\n                                    \"boundedValue\": {\n                                        \"lowerBound\": 0.07,\n                                        \"upperBound\": 0.09,\n                                        \"value\": 0.444,\n                                        \"methodology\": \"POISSON_BOOTSTRAP\",\n                                    }\n                                },\n                            }\n                        }\n                    },\n                ),\n                (\n                    ((\"gender\", \"female\"),),\n                    {\n                        \"\": {\n                            \"\": {\n                                \"post_export_metrics/negative_rate@0.10\": {\n                                    \"boundedValue\": {\n                                        \"lowerBound\": 0.07,\n                                        \"upperBound\": 0.09,\n                                        \"value\": 0.09,\n                                        \"methodology\": \"POISSON_BOOTSTRAP\",\n                                    }\n                                },\n                
                \"accuracy\": {\n                                    \"boundedValue\": {\n                                        \"lowerBound\": 0.07,\n                                        \"upperBound\": 0.09,\n                                        \"value\": 0.333,\n                                        \"methodology\": \"POISSON_BOOTSTRAP\",\n                                    }\n                                },\n                            }\n                        }\n                    },\n                ),\n                (\n                    (\n                        (\"gender\", \"female\"),\n                        (\"sexual_orientation\", \"homosexual_gay_or_lesbian\"),\n                    ),\n                    {\n                        \"\": {\n                            \"\": {\n                                \"post_export_metrics/negative_rate@0.10\": {\n                                    \"boundedValue\": {\n                                        \"lowerBound\": 0.07,\n                                        \"upperBound\": 0.09,\n                                        \"value\": 0.1,\n                                        \"methodology\": \"POISSON_BOOTSTRAP\",\n                                    }\n                                },\n                                \"accuracy\": {\n                                    \"boundedValue\": {\n                                        \"lowerBound\": 0.07,\n                                        \"upperBound\": 0.09,\n                                        \"value\": 0.222,\n                                        \"methodology\": \"POISSON_BOOTSTRAP\",\n                                    }\n                                },\n                            }\n                        }\n                    },\n                ),\n            ]\n        )\n\n    def test_baseline(self):\n        test_eval_result = self.create_eval_result()\n        self.assertEqual(\n            
0.08,\n            weight_utils.get_baseline_value(\n                test_eval_result, \"Overall\", \"post_export_metrics/negative_rate@0.10\"\n            ),\n        )\n        self.assertEqual(\n            0.09,\n            weight_utils.get_baseline_value(\n                test_eval_result,\n                ((\"gender\", \"female\"),),\n                \"post_export_metrics/negative_rate@0.10\",\n            ),\n        )\n        # Test 'accuracy'.\n        self.assertEqual(\n            0.444,\n            weight_utils.get_baseline_value(test_eval_result, \"Overall\", \"accuracy\"),\n        )\n        # Test intersectional metrics.\n        self.assertEqual(\n            0.222,\n            weight_utils.get_baseline_value(\n                test_eval_result,\n                (\n                    (\"gender\", \"female\"),\n                    (\"sexual_orientation\", \"homosexual_gay_or_lesbian\"),\n                ),\n                \"accuracy\",\n            ),\n        )\n        with self.assertRaises(ValueError):\n            # Test slice not found.\n            weight_utils.get_baseline_value(\n                test_eval_result, ((\"nonexistant\", \"slice\"),), \"accuracy\"\n            )\n        with self.assertRaises(KeyError):\n            # Test metric not found.\n            weight_utils.get_baseline_value(\n                test_eval_result, ((\"gender\", \"female\"),), \"nonexistent_metric\"\n            )\n\n    def test_get_metric_value_raise_key_error(self):\n        input_dict = {\"\": {\"\": {\"accuracy\": 0.1}}}\n        metric_name = \"nonexistent_metric\"\n        with self.assertRaises(KeyError):\n            weight_utils._get_metric_value(input_dict, metric_name)\n\n    def test_get_metric_value_raise_unsupported_value(self):\n        input_dict = {\"\": {\"\": {\"accuracy\": {\"boundedValue\": {1}}}}}\n        metric_name = \"accuracy\"\n        with self.assertRaises(TypeError):\n            
weight_utils._get_metric_value(input_dict, metric_name)\n\n    def test_get_metric_value_raise_empty_dict(self):\n        with self.assertRaises(KeyError):\n            weight_utils._get_metric_value({}, \"metric_name\")\n\n    def test_create_difference_dictionary(self):\n        test_eval_result = self.create_eval_result()\n        res = weight_utils.create_percentage_difference_dictionary(\n            test_eval_result, \"Overall\", \"post_export_metrics/negative_rate@0.10\"\n        )\n        self.assertEqual(3, len(res))\n        self.assertIn(\"gender-sexual_orientation\", res)\n        self.assertIn(\"gender\", res)\n        self.assertAlmostEqual(res[\"gender\"][\"female\"], 0.125)\n        self.assertAlmostEqual(res[\"\"][\"\"], 0)\n\n    def test_create_difference_dictionary_baseline(self):\n        test_eval_result = self.create_eval_result()\n        res = weight_utils.create_percentage_difference_dictionary(\n            test_eval_result,\n            ((\"gender\", \"female\"),),\n            \"post_export_metrics/negative_rate@0.10\",\n        )\n        self.assertEqual(3, len(res))\n        self.assertIn(\"gender-sexual_orientation\", res)\n        self.assertIn(\"gender\", res)\n        self.assertAlmostEqual(res[\"gender\"][\"female\"], 0)\n        self.assertAlmostEqual(res[\"\"][\"\"], -0.11111111)\n\n    def test_create_difference_dictionary_bounded_metrics(self):\n        test_eval_result = self.create_bounded_result()\n        res = weight_utils.create_percentage_difference_dictionary(\n            test_eval_result, \"Overall\", \"post_export_metrics/negative_rate@0.10\"\n        )\n        self.assertEqual(3, len(res))\n        self.assertIn(\"gender-sexual_orientation\", res)\n        self.assertIn(\"gender\", res)\n        self.assertAlmostEqual(res[\"gender\"][\"female\"], 0.125)\n        self.assertAlmostEqual(res[\"\"][\"\"], 0)\n"
  },
  {
    "path": "fairness_indicators/test_cases/dlvm/fairness_indicators_dlvm_test_case.ipynb",
    "content": "{\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"Tce3stUlHN0L\"\n      },\n      \"source\": [\n        \"##### Copyright 2020 The TensorFlow Authors.\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"cellView\": \"form\",\n        \"id\": \"tuOe1ymfHZPu\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"#@title Licensed under the Apache License, Version 2.0 (the \\\"License\\\");\\n\",\n        \"# you may not use this file except in compliance with the License.\\n\",\n        \"# You may obtain a copy of the License at\\n\",\n        \"#\\n\",\n        \"# https://www.apache.org/licenses/LICENSE-2.0\\n\",\n        \"#\\n\",\n        \"# Unless required by applicable law or agreed to in writing, software\\n\",\n        \"# distributed under the License is distributed on an \\\"AS IS\\\" BASIS,\\n\",\n        \"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\\n\",\n        \"# See the License for the specific language governing permissions and\\n\",\n        \"# limitations under the License.\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"aalPefrUUplk\"\n      },\n      \"source\": [\n        \"# Fairness Indicators DLVM Test Case\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"u33JXdluZ2lG\"\n      },\n      \"source\": [\n        \"## Setup\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": \"B8dlyTyiTe-9\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"import tensorflow as tf\\n\",\n        \"print('TF version: {}'.format(tf.__version__))\\n\",\n        \"\\n\",\n        \"import tensorflow_model_analysis as tfma\\n\",\n        \"print('TFMA version: 
{}'.format(tfma.__version__))\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": \"HG4ww5SwVUaq\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"# Download the tar file from GCP and extract it\\n\",\n        \"import io, os, tempfile\\n\",\n        \"TAR_NAME = 'saved_models-2.2'\\n\",\n        \"BASE_DIR = tempfile.mkdtemp()\\n\",\n        \"DATA_DIR = os.path.join(BASE_DIR, TAR_NAME, 'data')\\n\",\n        \"MODELS_DIR = os.path.join(BASE_DIR, TAR_NAME, 'models')\\n\",\n        \"SCHEMA = os.path.join(BASE_DIR, TAR_NAME, 'schema.pbtxt')\\n\",\n        \"OUTPUT_DIR = os.path.join(BASE_DIR, 'output')\\n\",\n        \"\\n\",\n        \"!curl -O https://storage.googleapis.com/artifacts.tfx-oss-public.appspot.com/datasets/{TAR_NAME}.tar\\n\",\n        \"!tar xf {TAR_NAME}.tar\\n\",\n        \"!mv {TAR_NAME} {BASE_DIR}\\n\",\n        \"!rm {TAR_NAME}.tar\\n\",\n        \"\\n\",\n        \"print(\\\"Here's what we downloaded:\\\")\\n\",\n        \"!ls -R {BASE_DIR}\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": \"h8i1NGecVZv1\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"from google.protobuf import text_format\\n\",\n        \"from tensorflow.python.lib.io import file_io\\n\",\n        \"from tensorflow_metadata.proto.v0 import schema_pb2\\n\",\n        \"from tensorflow.core.example import example_pb2\\n\",\n        \"\\n\",\n        \"schema = schema_pb2.Schema()\\n\",\n        \"contents = file_io.read_file_to_string(SCHEMA)\\n\",\n        \"schema = text_format.Parse(contents, schema)\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": \"BPg2wEx_Vk3o\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"import csv\\n\",\n        \"\\n\",\n        \"datafile = 
os.path.join(DATA_DIR, 'eval', 'data.csv')\\n\",\n        \"reader = csv.DictReader(open(datafile, 'r'))\\n\",\n        \"examples = []\\n\",\n        \"for line in reader:\\n\",\n        \"  example = example_pb2.Example()\\n\",\n        \"  for feature in schema.feature:\\n\",\n        \"    key = feature.name\\n\",\n        \"    if feature.type == schema_pb2.FLOAT:\\n\",\n        \"      example.features.feature[key].float_list.value[:] = (\\n\",\n        \"          [float(line[key])] if len(line[key]) \\u003e 0 else [])\\n\",\n        \"    elif feature.type == schema_pb2.INT:\\n\",\n        \"      example.features.feature[key].int64_list.value[:] = (\\n\",\n        \"          [int(line[key])] if len(line[key]) \\u003e 0 else [])\\n\",\n        \"    elif feature.type == schema_pb2.BYTES:\\n\",\n        \"      example.features.feature[key].bytes_list.value[:] = (\\n\",\n        \"          [line[key].encode('utf8')] if len(line[key]) \\u003e 0 else [])\\n\",\n        \"  # Add a new column 'big_tipper' that indicates whether the tip was \\u003e 20% of the fare. 
\\n\",\n        \"  # TODO(b/157064428): Remove after label transformation is supported for Keras.\\n\",\n        \"  big_tipper = float(line['tips']) \\u003e float(line['fare']) * 0.2\\n\",\n        \"  example.features.feature['big_tipper'].float_list.value[:] = [big_tipper]\\n\",\n        \"  examples.append(example)\\n\",\n        \"\\n\",\n        \"tfrecord_file = os.path.join(BASE_DIR, 'train_data.rio')\\n\",\n        \"with tf.io.TFRecordWriter(tfrecord_file) as writer:\\n\",\n        \"  for example in examples:\\n\",\n        \"    writer.write(example.SerializeToString())\\n\",\n        \"\\n\",\n        \"!ls {tfrecord_file}\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"B8AQqw20YAB9\"\n      },\n      \"source\": [\n        \"## Run Fairness Indicators and TFMA\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": \"RhN80nIvVn49\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"# Setup tfma.EvalConfig settings\\n\",\n        \"keras_eval_config = text_format.Parse(\\\"\\\"\\\"\\n\",\n        \"  ## Model information\\n\",\n        \"  model_specs {\\n\",\n        \"    # For keras (and serving models) we need to add a `label_key`.\\n\",\n        \"    label_key: \\\"big_tipper\\\"\\n\",\n        \"  }\\n\",\n        \"\\n\",\n        \"  ## Post training metric information. 
These will be merged with any built-in\\n\",\n        \"  ## metrics from training.\\n\",\n        \"  metrics_specs {\\n\",\n        \"    metrics { class_name: \\\"ExampleCount\\\" }\\n\",\n        \"    metrics { class_name: \\\"BinaryAccuracy\\\" }\\n\",\n        \"    metrics { class_name: \\\"AUC\\\" }\\n\",\n        \"    metrics { class_name: \\\"MeanLabel\\\" }\\n\",\n        \"    metrics { class_name: \\\"MeanPrediction\\\" }\\n\",\n        \"    metrics {\\n\",\n        \"          class_name: \\\"FairnessIndicators\\\"\\n\",\n        \"          config: '{ \\\"thresholds\\\": [0.3, 0.5, 0.7] }'\\n\",\n        \"    }\\n\",\n        \"  }\\n\",\n        \"\\n\",\n        \"  ## Slicing information\\n\",\n        \"  slicing_specs {}  # overall slice\\n\",\n        \"  slicing_specs {\\n\",\n        \"    feature_keys: [\\\"trip_start_hour\\\"]\\n\",\n        \"  }\\n\",\n        \"  slicing_specs {\\n\",\n        \"    feature_keys: [\\\"trip_start_day\\\"]\\n\",\n        \"  }\\n\",\n        \"  slicing_specs {\\n\",\n        \"    feature_values: {\\n\",\n        \"      key: \\\"trip_start_month\\\"\\n\",\n        \"      value: \\\"1\\\"\\n\",\n        \"    }\\n\",\n        \"  }\\n\",\n        \"  slicing_specs {\\n\",\n        \"    feature_keys: [\\\"trip_start_hour\\\", \\\"trip_start_day\\\"]\\n\",\n        \"  }\\n\",\n        \"\\\"\\\"\\\", tfma.EvalConfig())\\n\",\n        \"\\n\",\n        \"# Create a tfma.EvalSharedModel that points at our keras model.\\n\",\n        \"keras_model_path = os.path.join(MODELS_DIR, 'keras', '2')\\n\",\n        \"keras_eval_shared_model = tfma.default_eval_shared_model(\\n\",\n        \"    eval_saved_model_path=keras_model_path,\\n\",\n        \"    eval_config=keras_eval_config)\\n\",\n        \"\\n\",\n        \"keras_output_path = os.path.join(OUTPUT_DIR, 'keras')\\n\",\n        \"\\n\",\n        \"# Run TFMA\\n\",\n        \"keras_eval_result = tfma.run_model_analysis(\\n\",\n        \"    
eval_shared_model=keras_eval_shared_model,\\n\",\n        \"    eval_config=keras_eval_config,\\n\",\n        \"    data_location=tfrecord_file,\\n\",\n        \"    output_path=keras_output_path)\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"ktlASJQIzE3l\"\n      },\n      \"source\": [\n        \"## Render Fairness Indicators\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": \"ul0Ud9TVWB_b\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"tfma.addons.fairness.view.widget_view.render_fairness_indicator(eval_result=keras_eval_result)\"\n      ]\n    }\n  ],\n  \"metadata\": {\n    \"accelerator\": \"GPU\",\n    \"colab\": {\n      \"collapsed_sections\": [],\n      \"name\": \"Fairness Indicators DLVM Test Case.ipynb\",\n      \"private_outputs\": true,\n      \"provenance\": [\n      ],\n      \"toc_visible\": true\n    },\n    \"kernelspec\": {\n      \"display_name\": \"Python 3\",\n      \"name\": \"python3\"\n    }\n  },\n  \"nbformat\": 4,\n  \"nbformat_minor\": 0\n}\n"
  },
  {
    "path": "fairness_indicators/test_cases/dlvm/fi_test_installed.sh",
"content": "#!/bin/bash\n#\n# Copyright 2021 Google LLC. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n# A script to test a Fairness Indicators Jupyter notebook in the current environment.\n#\n# Internally, this script is used to test the Fairness Indicators installation on DLVM/DL Container\n# images.\n# - https://cloud.google.com/deep-learning-vm\n# - https://cloud.google.com/ai-platform/deep-learning-containers\n#\n# The list of container images can be found at:\n# https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container\n#\n\nnotebook_test() {\n  FILENAME=$1\n  OUTPUT_FILENAME=\"results_${1}\"\n\n  if ! papermill --no-progress-bar --no-log-output \"$FILENAME\" \"$OUTPUT_FILENAME\"; then\n    echo \"Notebook test failed. Unable to run the test using papermill for the file: ${FILENAME}\"\n    exit 1\n  fi\n}\n\nset -ex\n\nPYTHON_BINARY=$(which python)\n\nTENSORFLOW_VERSION=$(${PYTHON_BINARY} -c 'import tensorflow; print(tensorflow.__version__)')\n{\n  FI_VERSION=$(${PYTHON_BINARY} -c 'import fairness_indicators; print(fairness_indicators.__version__)');\n} || {\n  if [[ \"${TENSORFLOW_VERSION}\" == 2.3.* ||  \"${TENSORFLOW_VERSION}\" == 2.4.* ]]; then\n    echo \"ERROR: Fairness Indicators should be installed on TensorFlow ${TENSORFLOW_VERSION}\"\n    exit 1\n  else\n    echo \"Fairness Indicators is not installed on TensorFlow ${TENSORFLOW_VERSION}\"\n    exit 0\n  fi\n}\n\nif [[ \"${FI_VERSION}\" != *dev* ]]; then\n  VERSION_TAG_FLAG=\"-b v${FI_VERSION} --depth 1\"\nfi\n\nrm -rf fairness-indicators\n\n# Check FI v0.26.* with TF 2.3.*\nif [[ \"${TENSORFLOW_VERSION}\" == 2.3.* ]]; then\n  if [[ \"${FI_VERSION}\" > 0.26.* && \"${FI_VERSION}\" < 0.27.* ]]; then\n    # The test cases were added in 0.27.0.\n    git clone -b v0.27.0 --depth 1 https://github.com/tensorflow/fairness-indicators.git\n  else\n    echo \"ERROR: Fairness Indicators ${FI_VERSION} should not be installed on TensorFlow ${TENSORFLOW_VERSION}.\"\n    exit 1\n  fi\n\n# Check FI v0.27.* with TF 2.4.*\nelif [[ \"${TENSORFLOW_VERSION}\" == 2.4.* ]]; then\n  if [[ \"${FI_VERSION}\" > 0.27.* ]]; then\n    git clone ${VERSION_TAG_FLAG} https://github.com/tensorflow/fairness-indicators.git\n  else\n    echo \"ERROR: Fairness Indicators ${FI_VERSION} should not be installed on TensorFlow ${TENSORFLOW_VERSION}.\"\n    exit 1\n  fi\n\nelse\n  echo \"Fairness Indicators should not be installed on TensorFlow ${TENSORFLOW_VERSION}.\"\n  exit 0\nfi\n\ncd fairness-indicators/fairness_indicators/test_cases/dlvm/\nnotebook_test fairness_indicators_dlvm_test_case.ipynb\n"
  },
  {
    "path": "fairness_indicators/tutorial_utils/__init__.py",
    "content": "# Copyright 2019 Google LLC. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Init file for fairness_indicators.tutorial_utils.\"\"\"\n\nfrom fairness_indicators.tutorial_utils.util import (\n    convert_comments_data,\n    get_eval_results,\n)\n"
  },
  {
    "path": "fairness_indicators/tutorial_utils/util.py",
    "content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Util methods for the example colabs.\"\"\"\n\nimport os\nimport os.path\nimport tempfile\n\nimport pandas as pd\nimport tensorflow as tf\nimport tensorflow_model_analysis as tfma\nfrom google.protobuf import text_format\n\nTEXT_FEATURE = \"comment_text\"\nLABEL = \"toxicity\"\n\nSEXUAL_ORIENTATION_COLUMNS = [\n    \"heterosexual\",\n    \"homosexual_gay_or_lesbian\",\n    \"bisexual\",\n    \"other_sexual_orientation\",\n]\nGENDER_COLUMNS = [\"male\", \"female\", \"transgender\", \"other_gender\"]\nRELIGION_COLUMNS = [\n    \"christian\",\n    \"jewish\",\n    \"muslim\",\n    \"hindu\",\n    \"buddhist\",\n    \"atheist\",\n    \"other_religion\",\n]\nRACE_COLUMNS = [\"black\", \"white\", \"asian\", \"latino\", \"other_race_or_ethnicity\"]\nDISABILITY_COLUMNS = [\n    \"physical_disability\",\n    \"intellectual_or_learning_disability\",\n    \"psychiatric_or_mental_illness\",\n    \"other_disability\",\n]\n\nIDENTITY_COLUMNS = {\n    \"gender\": GENDER_COLUMNS,\n    \"sexual_orientation\": SEXUAL_ORIENTATION_COLUMNS,\n    \"religion\": RELIGION_COLUMNS,\n    \"race\": RACE_COLUMNS,\n    \"disability\": DISABILITY_COLUMNS,\n}\n\n_THRESHOLD = 0.5\n\n\ndef convert_comments_data(input_filename, output_filename=None):\n    \"\"\"Convert the public civil 
comments data.\n\n    In the original dataset\n    https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data\n    each identity annotation column holds the fraction of raters who thought\n    the comment referenced that identity. When processing the raw data, a\n    threshold of 0.5 is applied and the identity terms are grouped together by\n    category. For example, if one comment has {\n    male: 0.3, female: 1.0, transgender: 0.0, heterosexual: 0.8,\n    homosexual_gay_or_lesbian: 1.0 }, then after processing the data will be {\n    gender: [female], sexual_orientation: [heterosexual,\n    homosexual_gay_or_lesbian] }.\n\n    Args:\n    ----\n      input_filename: The path to the raw civil comments data, with extension\n        'tfrecord' or 'csv'.\n      output_filename: The path to write the processed civil comments data.\n\n    Returns:\n    -------\n      The file path to the converted dataset.\n\n    Raises:\n    ------\n      ValueError: If the input_filename does not have a supported extension.\n    \"\"\"\n    extension = os.path.splitext(input_filename)[1][1:]\n\n    if not output_filename:\n        output_filename = os.path.join(tempfile.mkdtemp(), \"output.\" + extension)\n\n    if extension == \"tfrecord\":\n        return _convert_comments_data_tfrecord(input_filename, output_filename)\n    elif extension == \"csv\":\n        return _convert_comments_data_csv(input_filename, output_filename)\n\n    raise ValueError(\n        \"input_filename must have supported file extension csv or tfrecord, \"\n        f\"given: {input_filename}\"\n    )\n\n\ndef _convert_comments_data_tfrecord(input_filename, output_filename=None):\n    \"\"\"Convert the public civil comments data, for tfrecord data.\"\"\"\n    with tf.io.TFRecordWriter(output_filename) as writer:\n        for serialized in tf.data.TFRecordDataset(filenames=[input_filename]):\n            example = tf.train.Example()\n            
example.ParseFromString(serialized.numpy())\n            if not example.features.feature[TEXT_FEATURE].bytes_list.value:\n                continue\n\n            new_example = tf.train.Example()\n            new_example.features.feature[TEXT_FEATURE].bytes_list.value.extend(\n                example.features.feature[TEXT_FEATURE].bytes_list.value\n            )\n            new_example.features.feature[LABEL].float_list.value.append(\n                1\n                if example.features.feature[LABEL].float_list.value[0] >= _THRESHOLD\n                else 0\n            )\n\n            for identity_category, identity_list in IDENTITY_COLUMNS.items():\n                grouped_identity = []\n                for identity in identity_list:\n                    if (\n                        example.features.feature[identity].float_list.value\n                        and example.features.feature[identity].float_list.value[0]\n                        >= _THRESHOLD\n                    ):\n                        grouped_identity.append(identity.encode())\n                new_example.features.feature[identity_category].bytes_list.value.extend(\n                    grouped_identity\n                )\n            writer.write(new_example.SerializeToString())\n\n    return output_filename\n\n\ndef _convert_comments_data_csv(input_filename, output_filename=None):\n    \"\"\"Convert the public civil comments data, for csv data.\"\"\"\n    df = pd.read_csv(input_filename)\n\n    # Filter out rows with empty comment text values.\n    df = df[df[TEXT_FEATURE].ne(\"\")]\n    df = df[df[TEXT_FEATURE].notnull()]\n\n    new_df = pd.DataFrame()\n    new_df[TEXT_FEATURE] = df[TEXT_FEATURE]\n\n    # Reduce the label to value 0 or 1.\n    new_df[LABEL] = df[LABEL].ge(_THRESHOLD).astype(int)\n\n    # Extract the list of all identity terms that exceed the threshold.\n    def identity_conditions(df, identity_list):\n        group = []\n        for identity in identity_list:\n            
if df[identity] >= _THRESHOLD:\n                group.append(identity)\n        return group\n\n    for identity_category, identity_list in IDENTITY_COLUMNS.items():\n        new_df[identity_category] = df.apply(\n            identity_conditions, args=((identity_list),), axis=1\n        )\n\n    new_df.to_csv(\n        output_filename,\n        header=[TEXT_FEATURE, LABEL, *IDENTITY_COLUMNS.keys()],\n        index=False,\n    )\n\n    return output_filename\n\n\ndef get_eval_results(\n    model_location,\n    eval_result_path,\n    validate_tfrecord_file,\n    slice_selection=\"religion\",\n    thresholds=None,\n    compute_confidence_intervals=True,\n):\n    \"\"\"Get Fairness Indicators eval results.\"\"\"\n    if thresholds is None:\n        thresholds = [0.4, 0.4125, 0.425, 0.4375, 0.45, 0.4675, 0.475, 0.4875, 0.5]\n\n    # Define slices that you want the evaluation to run on.\n    eval_config = text_format.Parse(\n        \"\"\"\n    model_specs {\n     label_key: '%s'\n   }\n   metrics_specs {\n     metrics {class_name: \"AUC\"}\n     metrics {class_name: \"ExampleCount\"}\n     metrics {class_name: \"Accuracy\"}\n     metrics {\n        class_name: \"FairnessIndicators\"\n        config: '{\"thresholds\": %s}'\n     }\n   }\n   slicing_specs {\n     feature_keys: '%s'\n   }\n   slicing_specs {}\n   options {\n       compute_confidence_intervals { value: %s }\n       disabled_outputs{values: \"analysis\"}\n   }\n   \"\"\"\n        % (\n            LABEL,\n            thresholds,\n            slice_selection,\n            \"true\" if compute_confidence_intervals else \"false\",\n        ),\n        tfma.EvalConfig(),\n    )\n\n    eval_shared_model = tfma.default_eval_shared_model(\n        eval_saved_model_path=model_location, tags=[tf.saved_model.SERVING]\n    )\n\n    # Run the fairness evaluation.\n    return tfma.run_model_analysis(\n        eval_shared_model=eval_shared_model,\n        data_location=validate_tfrecord_file,\n        
file_format=\"tfrecords\",\n        eval_config=eval_config,\n        output_path=eval_result_path,\n        extractors=None,\n    )\n"
  },
  {
    "path": "fairness_indicators/tutorial_utils/util_test.py",
    "content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Tests for fairness_indicators.tutorial_utils.util.\"\"\"\n\nimport csv\nimport os\nimport tempfile\nfrom unittest import mock\n\nimport pandas as pd\nimport tensorflow as tf\nimport tensorflow_model_analysis as tfma\nfrom google.protobuf import text_format\n\nfrom fairness_indicators.tutorial_utils import util\n\n\nclass UtilTest(tf.test.TestCase):\n    def _create_example_tfrecord(self):\n        example = text_format.Parse(\n            \"\"\"\n        features {\n          feature { key: \"comment_text\"\n                    value { bytes_list { value: [ \"comment 1\" ] }}\n                  }\n          feature { key: \"toxicity\" value { float_list { value: [ 0.1 ] }}}\n\n          feature { key: \"heterosexual\" value { float_list { value: [ 0.1 ] }}}\n          feature { key: \"homosexual_gay_or_lesbian\"\n                    value { float_list { value: [ 0.1 ] }}\n                  }\n          feature { key: \"bisexual\" value { float_list { value: [ 0.5 ] }}}\n          feature { key: \"other_sexual_orientation\"\n                    value { float_list { value: [ 0.1 ] }}\n                  }\n\n          feature { key: \"male\" value { float_list { value: [ 0.1 ] }}}\n          feature { key: \"female\" value { float_list { value: [ 0.2 ] 
}}}\n          feature { key: \"transgender\" value { float_list { value: [ 0.3 ] }}}\n          feature { key: \"other_gender\" value { float_list { value: [ 0.4 ] }}}\n\n          feature { key: \"christian\" value { float_list { value: [ 0.0 ] }}}\n          feature { key: \"jewish\" value { float_list { value: [ 0.1 ] }}}\n          feature { key: \"muslim\" value { float_list { value: [ 0.2 ] }}}\n          feature { key: \"hindu\" value { float_list { value: [ 0.3 ] }}}\n          feature { key: \"buddhist\" value { float_list { value: [ 0.4 ] }}}\n          feature { key: \"atheist\" value { float_list { value: [ 0.5 ] }}}\n          feature { key: \"other_religion\"\n                    value { float_list { value: [ 0.6 ] }}\n                  }\n\n          feature { key: \"black\" value { float_list { value: [ 0.1 ] }}}\n          feature { key: \"white\" value { float_list { value: [ 0.2 ] }}}\n          feature { key: \"asian\" value { float_list { value: [ 0.3 ] }}}\n          feature { key: \"latino\" value { float_list { value: [ 0.4 ] }}}\n          feature { key: \"other_race_or_ethnicity\"\n                    value { float_list { value: [ 0.5 ] }}\n                  }\n\n          feature { key: \"physical_disability\"\n                    value { float_list { value: [ 0.6 ] }}\n                  }\n          feature { key: \"intellectual_or_learning_disability\"\n                    value { float_list { value: [ 0.7 ] }}\n                  }\n          feature { key: \"psychiatric_or_mental_illness\"\n                    value { float_list { value: [ 0.8 ] }}\n                  }\n          feature { key: \"other_disability\"\n                    value { float_list { value: [ 1.0 ] }}\n                  }\n        }\n        \"\"\",\n            tf.train.Example(),\n        )\n        empty_comment_example = text_format.Parse(\n            \"\"\"\n        features {\n          feature { key: \"comment_text\"\n                    value { 
bytes_list {} }\n                  }\n          feature { key: \"toxicity\" value { float_list { value: [ 0.1 ] }}}\n        }\n        \"\"\",\n            tf.train.Example(),\n        )\n        return [example, empty_comment_example]\n\n    def _write_tf_records(self, examples):\n        filename = os.path.join(tempfile.mkdtemp(), \"input.tfrecord\")\n        with tf.io.TFRecordWriter(filename) as writer:\n            for e in examples:\n                writer.write(e.SerializeToString())\n        return filename\n\n    def test_convert_data_tfrecord(self):\n        input_file = self._write_tf_records(self._create_example_tfrecord())\n        output_file = util.convert_comments_data(input_file)\n        output_example_list = []\n        for serialized in tf.data.TFRecordDataset(filenames=[output_file]):\n            output_example = tf.train.Example()\n            output_example.ParseFromString(serialized.numpy())\n            output_example_list.append(output_example)\n\n        self.assertEqual(len(output_example_list), 1)\n        self.assertEqual(\n            output_example_list[0],\n            text_format.Parse(\n                \"\"\"\n        features {\n          feature { key: \"comment_text\"\n                    value { bytes_list {value: [ \"comment 1\" ] }}\n                  }\n          feature { key: \"toxicity\" value { float_list { value: [ 0.0 ] }}}\n          feature { key: \"sexual_orientation\"\n                    value { bytes_list { value: [\"bisexual\"] }}\n                  }\n          feature { key: \"gender\" value { bytes_list { }}}\n          feature { key: \"race\"\n                    value { bytes_list { value: [ \"other_race_or_ethnicity\" ] }}\n                  }\n          feature { key: \"religion\"\n                    value { bytes_list {\n                      value: [  \"atheist\", \"other_religion\" ] }\n                    }\n                  }\n          feature { key: \"disability\" value { bytes_list {\n        
            value: [\n                      \"physical_disability\",\n                      \"intellectual_or_learning_disability\",\n                      \"psychiatric_or_mental_illness\",\n                      \"other_disability\"] }}\n                  }\n        }\n        \"\"\",\n                tf.train.Example(),\n            ),\n        )\n\n    def _create_example_csv(self, use_fake_embedding=False):\n        header = [\n            \"comment_text\",\n            \"toxicity\",\n            \"heterosexual\",\n            \"homosexual_gay_or_lesbian\",\n            \"bisexual\",\n            \"other_sexual_orientation\",\n            \"male\",\n            \"female\",\n            \"transgender\",\n            \"other_gender\",\n            \"christian\",\n            \"jewish\",\n            \"muslim\",\n            \"hindu\",\n            \"buddhist\",\n            \"atheist\",\n            \"other_religion\",\n            \"black\",\n            \"white\",\n            \"asian\",\n            \"latino\",\n            \"other_race_or_ethnicity\",\n            \"physical_disability\",\n            \"intellectual_or_learning_disability\",\n            \"psychiatric_or_mental_illness\",\n            \"other_disability\",\n        ]\n        example = [\n            \"comment 1\" if not use_fake_embedding else 0.35,\n            0.1,\n            # sexual orientation\n            0.1,\n            0.1,\n            0.5,\n            0.1,\n            # gender\n            0.1,\n            0.2,\n            0.3,\n            0.4,\n            # religion\n            0.0,\n            0.1,\n            0.2,\n            0.3,\n            0.4,\n            0.5,\n            0.6,\n            # race or ethnicity\n            0.1,\n            0.2,\n            0.3,\n            0.4,\n            0.5,\n            # disability\n            0.6,\n            0.7,\n            0.8,\n            1.0,\n        ]\n        empty_comment_example = [\n            \"\" 
if not use_fake_embedding else 0.35,\n            0.1,\n            0.1,\n            0.1,\n            0.5,\n            0.1,\n            0.1,\n            0.2,\n            0.3,\n            0.4,\n            0.0,\n            0.1,\n            0.2,\n            0.3,\n            0.4,\n            0.5,\n            0.6,\n            0.1,\n            0.2,\n            0.3,\n            0.4,\n            0.5,\n            0.6,\n            0.7,\n            0.8,\n            1.0,\n        ]\n        return [header, example, empty_comment_example]\n\n    def _write_csv(self, examples):\n        filename = os.path.join(tempfile.mkdtemp(), \"input.csv\")\n        with open(filename, \"w\", newline=\"\") as csvfile:\n            csvwriter = csv.writer(csvfile, delimiter=\",\")\n            for example in examples:\n                csvwriter.writerow(example)\n\n        return filename\n\n    def test_convert_data_csv(self):\n        input_file = self._write_csv(self._create_example_csv())\n        output_file = util.convert_comments_data(input_file)\n\n        # Remove the quotes around identity terms list that read_csv injects.\n        df = pd.read_csv(output_file).replace(\"'\", \"\", regex=True)\n\n        expected_df = pd.DataFrame()\n        expected_df = pd.concat(\n            [\n                expected_df,\n                pd.DataFrame.from_dict(\n                    {\n                        \"comment_text\": [\"comment 1\"],\n                        \"toxicity\": [0.0],\n                        \"gender\": [[]],\n                        \"sexual_orientation\": [[\"bisexual\"]],\n                        \"race\": [[\"other_race_or_ethnicity\"]],\n                        \"religion\": [[\"atheist\", \"other_religion\"]],\n                        \"disability\": [\n                            [\n                                \"physical_disability\",\n                                \"intellectual_or_learning_disability\",\n                                
\"psychiatric_or_mental_illness\",\n                                \"other_disability\",\n                            ]\n                        ],\n                    }\n                ),\n            ],\n            ignore_index=True,\n        )\n\n        # `reset_index(..., inplace=True)` returns None, so the original\n        # assertion compared None == None and always passed. Compare the\n        # stringified frame contents instead (the identity columns round-trip\n        # through CSV as strings).\n        self.assertEqual(\n            df.astype({\"toxicity\": float}).astype(str).to_dict(),\n            expected_df.astype(str).replace(\"'\", \"\", regex=True).to_dict(),\n        )\n\n    # TODO(b/172260507): we should also look into testing the e2e call with tfma.\n    @mock.patch(\"tensorflow_model_analysis.default_eval_shared_model\", autospec=True)\n    @mock.patch(\"tensorflow_model_analysis.run_model_analysis\", autospec=True)\n    def test_get_eval_results_called_correctly(\n        self, mock_run_model_analysis, mock_shared_model\n    ):\n        mock_model = \"model\"\n        mock_shared_model.return_value = mock_model\n\n        model_location = \"saved_model\"\n        eval_results_path = \"eval_results\"\n        data_file = \"data\"\n        util.get_eval_results(model_location, eval_results_path, data_file)\n\n        mock_shared_model.assert_called_once_with(\n            eval_saved_model_path=model_location, tags=[tf.saved_model.SERVING]\n        )\n\n        expected_eval_config = text_format.Parse(\n            \"\"\"\n     model_specs {\n       label_key: 'toxicity'\n     }\n     metrics_specs {\n       metrics {class_name: \"AUC\"}\n       metrics {class_name: \"ExampleCount\"}\n       metrics {class_name: \"Accuracy\"}\n       metrics {\n          class_name: \"FairnessIndicators\"\n          config: '{\"thresholds\": [0.4, 0.4125, 0.425, 0.4375, 0.45, 0.4675, 0.475, 0.4875, 0.5]}'\n       }\n     }\n     slicing_specs {\n       feature_keys: 'religion'\n     }\n     slicing_specs {}\n     options {\n         compute_confidence_intervals { value: true }\n         disabled_outputs{values: \"analysis\"}\n     }\n     \"\"\",\n            tfma.EvalConfig(),\n        )\n\n        mock_run_model_analysis.assert_called_once_with(\n            eval_shared_model=mock_model,\n            data_location=data_file,\n            file_format=\"tfrecords\",\n            eval_config=expected_eval_config,\n            output_path=eval_results_path,\n            extractors=None,\n        )\n"
  },
  {
    "path": "fairness_indicators/version.py",
    "content": "# Copyright 2019 Google LLC. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Contains the version string of Fairness Indicators.\"\"\"\n\n# Note that setup.py uses this version.\n__version__ = \"0.49.0.dev\"\n"
  },
  {
    "path": "mkdocs.yml",
    "content": "site_name: Fairness Indicators\nrepo_name: \"fairness-indicators\"\nrepo_url: https://github.com/tensorflow/fairness-indicators\n\ntheme:\n  name: material\n  logo: images/tf_full_color_primary_icon.svg\n  palette:\n    # Palette toggle for automatic mode\n    - media: \"(prefers-color-scheme)\"\n      primary: custom\n      accent: custom\n      toggle:\n        icon: material/brightness-auto\n        name: Switch to light mode\n\n    # Palette toggle for light mode\n    - media: \"(prefers-color-scheme: light)\"\n      primary: custom\n      accent: custom\n      scheme: default\n      toggle:\n        icon: material/brightness-7\n        name: Switch to dark mode\n\n    # Palette toggle for dark mode\n    - media: \"(prefers-color-scheme: dark)\"\n      primary: custom\n      accent: custom\n      scheme: slate\n      toggle:\n        icon: material/brightness-4\n        name: Switch to system preference\n  favicon: images/tf_full_color_primary_icon.svg\n\n  features:\n    - content.code.copy\n    - content.code.select\n    - content.action.edit\n\nextra_css:\n  - stylesheets/extra.css\n\nextra_javascript:\n  - javascripts/mathjax.js\n  - https://unpkg.com/mathjax@3/es5/tex-mml-chtml.js\n\nplugins:\n  - mkdocs-jupyter:\n      execute: false\n\nmarkdown_extensions:\n  - admonition\n  - attr_list\n  - def_list\n  - tables\n  - toc:\n      permalink: true\n  - pymdownx.highlight:\n      anchor_linenums: true\n      linenums: false\n      line_spans: __span\n      pygments_lang_class: true\n  - pymdownx.inlinehilite\n  - pymdownx.snippets\n  - pymdownx.superfences\n  - pymdownx.arithmatex:\n      generic: true\n  - pymdownx.critic\n  - pymdownx.caret\n  - pymdownx.keys\n  - pymdownx.mark\n  - pymdownx.tilde\n  - pymdownx.blocks.html\n  - md_in_html\n  - pymdownx.emoji:\n      emoji_index: !!python/name:material.extensions.emoji.twemoji\n      emoji_generator: !!python/name:material.extensions.emoji.to_svg\n\nnav:\n  - \"Overview\": index.md\n  - 
\"Thinking about Fairness Evaluation\": guide/guidance.md\n  - \"Introduction to Fairness Indicators\": tutorials/Fairness_Indicators_Example_Colab.ipynb\n  - \"Evaluate fairness using TF-Hub models\": tutorials/Fairness_Indicators_on_TF_Hub_Text_Embeddings.ipynb\n  - \"Visualize with the TensorBoard Plugin\": tutorials/Fairness_Indicators_TensorBoard_Plugin_Example_Colab.ipynb\n  - \"Evaluate toxicity in Wiki comments\": tutorials/Fairness_Indicators_TFCO_Wiki_Case_Study.ipynb\n  - \"TensorFlow constrained optimization example\": tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb\n  - \"Pandas DataFrame case study\": tutorials/Fairness_Indicators_Pandas_Case_Study.ipynb\n  - \"FaceSSD example Colab\": tutorials/Facessd_Fairness_Indicators_Example_Colab.ipynb\n
  },
  {
    "path": "pyproject.toml",
    "content": "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#      http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n[build-system]\nrequires = [\n  \"setuptools\",\n  \"wheel\",\n]\n\n[tool.ruff]\nline-length = 88\n\n[tool.ruff.lint]\nselect = [\n    # pycodestyle\n    \"E\",\n    \"W\",\n    # Pyflakes\n    \"F\",\n    # pyupgrade\n    \"UP\",\n    # flake8-bugbear\n    \"B\",\n    # flake8-simplify\n    \"SIM\",\n    # isort\n    \"I\",\n    # pep8 naming\n    \"N\",\n    # pydocstyle\n    \"D\",\n    # annotations\n    \"ANN\",\n    # debugger\n    \"T10\",\n    # flake8-pytest\n    \"PT\",\n    # flake8-return\n    \"RET\",\n    # flake8-unused-arguments\n    \"ARG\",\n    # flake8-fixme\n    \"FIX\",\n    # flake8-eradicate\n    \"ERA\",\n    # pandas-vet\n    \"PD\",\n    # numpy-specific rules\n    \"NPY\",\n]\n\nignore = [\n    \"D104\",   # Missing docstring in public package\n    \"D100\",   # Missing docstring in public module\n    \"D211\",   # No blank line before class\n    \"PD901\",  # Avoid using 'df' for pandas dataframes. 
Perfectly fine in functions with limited scope\n    \"ANN201\", # Missing return type annotation for public function (makes no sense for NoneType return types...)\n    \"ANN101\", # Missing type annotation for `self`\n    \"ANN204\", # Missing return type annotation for special method\n    \"ANN002\", # Missing type annotation for `*args`\n    \"ANN003\", # Missing type annotation for `**kwargs`\n    \"D105\",   # Missing docstring in magic method\n    \"D203\",   # 1 blank line required before class docstring\n    \"D204\",   # 1 blank line required after class docstring\n    \"D413\",   # 1 blank line after parameters\n    \"SIM108\", # Simplify if/else to one line; not always clearer\n    \"D206\",   # Docstrings should be indented with spaces; unnecessary when running ruff-format\n    \"E501\",   # Line length too long; unnecessary when running ruff-format\n    \"W191\",   # Indentation contains tabs; unnecessary when running ruff-format\n\n    # REMOVE THESE AS FIXED\n    \"ANN001\", # Missing type annotation for function argument\n    \"ANN202\", # Missing return type annotation for private function\n    \"ANN401\", # Dynamically typed expressions (typing.Any) are disallowed\n    \"ARG001\", # Unused function argument\n    \"ARG002\", # Unused method argument\n    \"B018\",   # Found useless expression\n    \"D101\",   # Missing docstring in public class\n    \"D102\",   # Missing docstring in public method\n    \"D103\",   # Missing docstring in public function\n    \"D107\",   # Missing docstring in `__init__`\n    \"D401\",   # First line of docstring should be in imperative mood\n    \"ERA001\", # Found commented-out code\n    \"FIX002\", # Line contains TODO\n    \"N802\",   # Function name should be lowercase\n    \"PD002\",  # `inplace=True` should be avoided\n    \"PD004\",  # `.notna` is preferred to `.notnull`\n    \"PT009\",  # Use a regular `assert` instead of unittest-style\n    \"PT027\",  # Use `pytest.raises` instead of unittest-style 
`assertRaises`\n    \"RET505\", # Unnecessary `elif` after `return` statement\n    \"RET506\", # Unnecessary `else` after `raise` statement\n    \"SIM105\", # Use `contextlib.suppress` instead of `try`-`except`-`pass`\n    \"UP008\",  # Use `super()` instead of `super(__class__, self)`\n    \"UP031\",  # Use format specifiers instead of percent format\n]\n\n\n[tool.ruff.lint.per-file-ignores]\n\"__init__.py\" = [\"F401\"]\n\n[tool.pytest.ini_options]\naddopts = \"--import-mode=importlib\"\ntestpaths = [\"fairness_indicators\"]\npython_files = [\"*_test.py\"]\n"
  },
  {
    "path": "requirements-docs.txt",
    "content": "mkdocs\nmkdocs-material\nmkdocs-jupyter\n"
  },
  {
    "path": "setup.py",
    "content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Setup to install Fairness Indicators.\"\"\"\n\nimport os\nimport sys\nfrom pathlib import Path\n\nimport setuptools\n\nif sys.version_info >= (3, 11):\n    sys.exit(\"Sorry, Python >= 3.11 is not supported\")\n\n\ndef select_constraint(default, nightly=None, git_master=None):\n    \"\"\"Select dependency constraint based on TFX_DEPENDENCY_SELECTOR env var.\"\"\"\n    selector = os.environ.get(\"TFX_DEPENDENCY_SELECTOR\")\n    if selector == \"UNCONSTRAINED\":\n        return \"\"\n    elif selector == \"NIGHTLY\" and nightly is not None:\n        return nightly\n    elif selector == \"GIT_MASTER\" and git_master is not None:\n        return git_master\n    else:\n        return default\n\n\nREQUIRED_PACKAGES = [\n    \"tensorflow>=2.17,<2.18\",\n    \"tensorflow-hub>=0.16.1,<1.0.0\",\n    \"tensorflow-data-validation>=1.17.0,<2.0.0\",\n    \"tensorflow-model-analysis>=0.48.0,<0.49.0\",\n    \"witwidget>=1.4.4,<2\",\n    \"protobuf>=4.21.6,<6.0.0\",\n]\n\nTEST_PACKAGES = [\n    \"pytest>=8.3.0,<9\",\n]\n\nwith open(Path(\"./requirements-docs.txt\").expanduser().absolute()) as f:\n    DOCS_PACKAGES = [req.strip() for req in f.readlines()]\n\n# Get version from version module.\nwith open(\"fairness_indicators/version.py\") as fp:\n    globals_dict = 
{}\n    exec(fp.read(), globals_dict)  # pylint: disable=exec-used\n__version__ = globals_dict[\"__version__\"]\nwith open(\"README.md\", encoding=\"utf-8\") as fh:\n    long_description = fh.read()\nsetuptools.setup(\n    name=\"fairness_indicators\",\n    version=__version__,\n    description=\"Fairness Indicators\",\n    long_description=long_description,\n    long_description_content_type=\"text/markdown\",\n    url=\"https://github.com/tensorflow/fairness-indicators\",\n    author=\"Google LLC\",\n    author_email=\"packages@tensorflow.org\",\n    packages=setuptools.find_packages(exclude=[\"tensorboard_plugin\"]),\n    package_data={\n        \"fairness_indicators\": [\"documentation/*\"],\n    },\n    python_requires=\">=3.9,<4\",\n    install_requires=REQUIRED_PACKAGES,\n    tests_require=REQUIRED_PACKAGES,\n    extras_require={\"docs\": DOCS_PACKAGES, \"test\": TEST_PACKAGES, \"dev\": \"pre-commit\"},\n    # PyPI package information.\n    classifiers=[\n        \"Development Status :: 4 - Beta\",\n        \"Intended Audience :: Developers\",\n        \"Intended Audience :: Education\",\n        \"Intended Audience :: Science/Research\",\n        \"License :: OSI Approved :: Apache Software License\",\n        \"Operating System :: OS Independent\",\n        \"Programming Language :: Python :: 3\",\n        \"Programming Language :: Python :: 3.9\",\n        \"Programming Language :: Python :: 3 :: Only\",\n        \"Topic :: Scientific/Engineering\",\n        \"Topic :: Scientific/Engineering :: Mathematics\",\n        \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n        \"Topic :: Software Development\",\n        \"Topic :: Software Development :: Libraries\",\n        \"Topic :: Software Development :: Libraries :: Python Modules\",\n    ],\n    license=\"Apache 2.0\",\n    keywords=(\n        \"tensorflow model analysis fairness indicators tensorboard machine\" \" learning\"\n    ),\n)\n"
  },
  {
    "path": "tensorboard_plugin/README.md",
    "content": "# Evaluating Models with the Fairness Indicators Dashboard [Beta]\n\n![Fairness Indicators](https://raw.githubusercontent.com/tensorflow/tensorboard/master/docs/images/fairness-indicators.png)\n\nFairness Indicators for TensorBoard enables easy computation of\ncommonly identified fairness metrics for _binary_ and _multiclass_ classifiers.\nWith the plugin, you can visualize fairness evaluations for your runs and easily\ncompare performance across groups.\n\nIn particular, Fairness Indicators for TensorBoard allows you to evaluate and\nvisualize model performance, sliced across defined groups of users. Feel\nconfident about your results with confidence intervals and evaluations at\nmultiple thresholds.\n\nMany existing tools for evaluating fairness concerns don’t work well on\nlarge-scale datasets and models. At Google, it is important for us to have tools\nthat can work on billion-user systems. Fairness Indicators lets you evaluate use\ncases of any size, in the TensorBoard environment or in\n[Colab](https://github.com/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/).\n\n## Installation\n\nTo install Fairness Indicators for TensorBoard, run:\n\n```bash\npython3 -m virtualenv ~/tensorboard_demo\nsource ~/tensorboard_demo/bin/activate\npip install --upgrade pip\npip install fairness_indicators\npip install tensorboard-plugin-fairness-indicators\n```\n\n### Nightly Packages\n\nThe TensorBoard Plugin also hosts nightly packages at\nhttps://pypi-nightly.tensorflow.org on Google Cloud. 
To install the latest\nnightly package, please use the following command:\n\n```bash\npip install --extra-index-url https://pypi-nightly.tensorflow.org/simple tensorboard-plugin-fairness-indicators\n```\n\nThis will install the nightly packages for the major dependencies of the\nTensorBoard Plugin, such as TensorFlow Model Analysis (TFMA).\n\n## Demo Colab\n\n[Fairness_Indicators_TensorBoard_Plugin_Example_Colab.ipynb](https://github.com/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_TensorBoard_Plugin_Example_Colab.ipynb)\ncontains an end-to-end demo that trains and evaluates a model and visualizes the fairness evaluation\nresults in TensorBoard.\n\n## Usage\n\nTo use Fairness Indicators with your own data and evaluations:\n\n1.  Train a new model and evaluate it using the\n    `tensorflow_model_analysis.run_model_analysis` or\n    `tensorflow_model_analysis.ExtractEvaluateAndWriteResults` API in\n    [model_eval_lib](https://github.com/tensorflow/model-analysis/blob/master/tensorflow_model_analysis/api/model_eval_lib.py).\n    For code snippets on how to do this, see the Fairness Indicators colab\n    [here](https://github.com/tensorflow/fairness-indicators).\n\n2.  
Write a summary data file using [`demo.py`](https://github.com/tensorflow/fairness-indicators/blob/master/tensorboard_plugin/tensorboard_plugin_fairness_indicators/demo.py), which will be read\n    by TensorBoard to render the Fairness Indicators dashboard (see the\n    [TensorBoard tutorial](https://github.com/tensorflow/tensorboard/blob/master/README.md)\n    for more information on summary data files).\n\n    Flags to be used with the `demo.py` utility:\n\n    -   `--logdir`: Directory where TensorBoard will write the summary\n    -   `--eval_result_output_dir`: Directory containing evaluation results\n        produced by TFMA\n\n    ```bash\n    python demo.py --logdir=<logdir> --eval_result_output_dir=<eval_result_dir>\n    ```\n\n    Alternatively, you can use the `tensorboard_plugin_fairness_indicators.summary_v2` API to write the summary file:\n\n    ```python\n    writer = tf.summary.create_file_writer(<logdir>)\n    with writer.as_default():\n        summary_v2.FairnessIndicators(<eval_result_dir>, step=1)\n    writer.close()\n    ```\n\n3.  Run TensorBoard\n\n    Note: This will start a local instance. After the local instance is started, a link\n    will be displayed in the terminal. Open the link in your browser to view the\n    Fairness Indicators dashboard.\n\n    -   `tensorboard --logdir=<logdir>`\n    -   Select the new evaluation run using the drop-down on the left side of\n        the dashboard to visualize results.\n\n## Compatible versions\n\nThe following table shows the package versions that are\ncompatible with each other. 
This is determined by our testing framework, but\nother *untested* combinations may also work.\n\n|tensorboard-plugin                                                                                           | tensorflow    | tensorflow-model-analysis |\n|-------------------------------------------------------------------------------------------------------------|---------------|---------------------------|\n|[GitHub master](https://github.com/tensorflow/fairness-indicators/blob/master/tensorboard_plugin/README.md)  | nightly (2.x) | 0.48.0                    |\n|[v0.48.0](https://github.com/tensorflow/fairness-indicators/blob/v0.48.0/tensorboard_plugin/README.md)       | 2.17.1        | 0.48.0                    |\n|[v0.47.0](https://github.com/tensorflow/fairness-indicators/blob/v0.47.0/tensorboard_plugin/README.md)       | 2.16.2        | 0.47.1                    |\n|[v0.46.0](https://github.com/tensorflow/fairness-indicators/blob/v0.46.0/tensorboard_plugin/README.md)       | 2.15.0        | 0.46.0                    |\n|[v0.44.0](https://github.com/tensorflow/fairness-indicators/blob/v0.44.0/tensorboard_plugin/README.md)       | 2.12.0        | 0.44.0                    |\n|[v0.43.0](https://github.com/tensorflow/fairness-indicators/blob/v0.43.0/tensorboard_plugin/README.md)       | 2.11.0        | 0.43.0                    |\n|[v0.42.0](https://github.com/tensorflow/fairness-indicators/blob/v0.42.0/tensorboard_plugin/README.md)       | 2.10.0        | 0.42.0                    |\n|[v0.41.0](https://github.com/tensorflow/fairness-indicators/blob/v0.41.0/tensorboard_plugin/README.md)       | 2.9.0         | 0.41.0                    |\n|[v0.40.0](https://github.com/tensorflow/fairness-indicators/blob/v0.40.0/tensorboard_plugin/README.md)       | 2.9.0         | 0.40.0                    |\n|[v0.39.0](https://github.com/tensorflow/fairness-indicators/blob/v0.39.0/tensorboard_plugin/README.md)       | 2.8.0         | 0.39.0                    
|\n|[v0.38.0](https://github.com/tensorflow/fairness-indicators/blob/v0.38.0/tensorboard_plugin/README.md)       | 2.8.0         | 0.38.0                    |\n|[v0.37.0](https://github.com/tensorflow/fairness-indicators/blob/v0.37.0/tensorboard_plugin/README.md)       | 2.7.0         | 0.37.0                    |\n|[v0.36.0](https://github.com/tensorflow/fairness-indicators/blob/v0.36.0/tensorboard_plugin/README.md)       | 2.7.0         | 0.36.0                    |\n|[v0.35.0](https://github.com/tensorflow/fairness-indicators/blob/v0.35.0/tensorboard_plugin/README.md)       | 2.6.0         | 0.35.0                    |\n|[v0.34.0](https://github.com/tensorflow/fairness-indicators/blob/v0.34.0/tensorboard_plugin/README.md)       | 2.6.0         | 0.34.0                    |\n|[v0.33.0](https://github.com/tensorflow/fairness-indicators/blob/v0.33.0/tensorboard_plugin/README.md)       | 2.5.0         | 0.33.0                    |\n|[v0.30.0](https://github.com/tensorflow/fairness-indicators/blob/v0.30.0/tensorboard_plugin/README.md)       | 2.4.0         | 0.30.0                    |\n|[v0.29.0](https://github.com/tensorflow/fairness-indicators/blob/v0.29.0/tensorboard_plugin/README.md)       | 2.4.0         | 0.29.0                    |\n|[v0.28.0](https://github.com/tensorflow/fairness-indicators/blob/v0.28.0/tensorboard_plugin/README.md)       | 2.4.0         | 0.28.0                    |\n|[v0.27.0](https://github.com/tensorflow/fairness-indicators/blob/v0.27.0/tensorboard_plugin/README.md)       | 2.4.0         | 0.27.0                    |\n|[v0.26.0](https://github.com/tensorflow/fairness-indicators/blob/v0.26.0/tensorboard_plugin/README.md)       | 2.3.0         | 0.26.0                    |\n|[v0.25.0](https://github.com/tensorflow/fairness-indicators/blob/v0.25.0/tensorboard_plugin/README.md)       | 2.3.0         | 0.25.0                    |\n|[v0.24.0](https://github.com/tensorflow/fairness-indicators/blob/v0.24.0/tensorboard_plugin/README.md)       | 
2.3.0         | 0.24.0                    |\n|[v0.23.0](https://github.com/tensorflow/fairness-indicators/blob/v0.23.0/tensorboard_plugin/README.md)       | 2.3.0         | 0.23.0                    |\n"
  },
  {
    "path": "tensorboard_plugin/pytest.ini",
    "content": "[pytest]\naddopts = --import-mode=importlib\ntestpaths = tensorboard_plugin_fairness_indicators\npython_files = *_test.py\n"
  },
  {
    "path": "tensorboard_plugin/setup.py",
    "content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Setup to install Fairness Indicators Tensorboard plugin.\"\"\"\n\nimport os\nimport sys\n\nfrom setuptools import find_packages, setup\n\nif sys.version_info >= (3, 11):\n    sys.exit(\"Sorry, Python >= 3.11 is not supported\")\n\n\ndef select_constraint(default, nightly=None, git_master=None):\n    \"\"\"Select dependency constraint based on TFX_DEPENDENCY_SELECTOR env var.\"\"\"\n    selector = os.environ.get(\"TFX_DEPENDENCY_SELECTOR\")\n    if selector == \"UNCONSTRAINED\":\n        return \"\"\n    elif selector == \"NIGHTLY\" and nightly is not None:\n        return nightly\n    elif selector == \"GIT_MASTER\" and git_master is not None:\n        return git_master\n    else:\n        return default\n\n\nREQUIRED_PACKAGES = [\n    \"protobuf>=4.21.6,<6.0.0\",\n    \"tensorboard>=2.17.0,<2.18.0\",\n    \"tensorflow>=2.17,<2.18\",\n    \"tf-keras>=2.17,<2.18\",\n    \"tensorflow-model-analysis>=0.48,<0.49\",\n    \"werkzeug<2\",\n]\n\nTEST_PACKAGES = [\n    \"pytest>=8.3.0,<9\",\n]\n\nwith open(\"README.md\", encoding=\"utf-8\") as fh:\n    long_description = fh.read()\n\n# Get version from version module.\nwith open(\"tensorboard_plugin_fairness_indicators/version.py\") as fp:\n    globals_dict = {}\n    exec(fp.read(), globals_dict)  # pylint: 
disable=exec-used\n__version__ = globals_dict[\"__version__\"]\n\nsetup(\n    name=\"tensorboard_plugin_fairness_indicators\",\n    version=__version__,\n    description=\"Fairness Indicators TensorBoard Plugin\",\n    long_description=long_description,\n    long_description_content_type=\"text/markdown\",\n    url=\"https://github.com/tensorflow/fairness-indicators\",\n    author=\"Google LLC\",\n    author_email=\"packages@tensorflow.org\",\n    packages=find_packages(),\n    package_data={\n        \"tensorboard_plugin_fairness_indicators\": [\"static/**\"],\n    },\n    entry_points={\n        \"tensorboard_plugins\": [\n            \"fairness_indicators = tensorboard_plugin_fairness_indicators.plugin:FairnessIndicatorsPlugin\",\n        ],\n    },\n    python_requires=\">=3.9,<4\",\n    install_requires=REQUIRED_PACKAGES,\n    tests_require=REQUIRED_PACKAGES,\n    extras_require={\n        \"test\": TEST_PACKAGES,\n    },\n    classifiers=[\n        \"Development Status :: 4 - Beta\",\n        \"Intended Audience :: Developers\",\n        \"Intended Audience :: Education\",\n        \"Intended Audience :: Science/Research\",\n        \"License :: OSI Approved :: Apache Software License\",\n        \"Operating System :: OS Independent\",\n        \"Programming Language :: Python :: 3\",\n        \"Programming Language :: Python :: 3.9\",\n        \"Programming Language :: Python :: 3 :: Only\",\n        \"Topic :: Scientific/Engineering\",\n        \"Topic :: Scientific/Engineering :: Mathematics\",\n        \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n        \"Topic :: Software Development\",\n        \"Topic :: Software Development :: Libraries\",\n        \"Topic :: Software Development :: Libraries :: Python Modules\",\n    ],\n    license=\"Apache 2.0\",\n    keywords=\"tensorflow model analysis fairness indicators tensorboard machine learning\",\n)\n"
  },
  {
    "path": "tensorboard_plugin/tensorboard_plugin_fairness_indicators/RELEASE.md",
    "content": "<!-- mdlint off(HEADERS_TOO_MANY_H1) -->\n\n# Current Version (Still in Development)\n\n## Major Features and Improvements\n\n## Bug Fixes and Other Changes\n\n## Breaking Changes\n\n## Deprecations\n\n# Version 0.48.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   Support `tensorflow>=2.17,<2.18`.\n*   Depends on `tensorflow-model-analysis>=0.48,<0.49`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.47.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   Support `tensorflow>=2.16,<2.17`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.46.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   N/A\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   Deprecated Python 3.8 support.\n\n# Version 0.44.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorflow>=2.12,<2.13`.\n*   Depends on `tensorflow-model-analysis>=0.44,<0.45`.\n*   Depends on `protobuf>=3.20.3,<5`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   Deprecating Python 3.7 support.\n\n# Version 0.43.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorflow>=2.11,<3`.\n*   Depends on `tensorflow-model-analysis>=0.43,<0.44`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.42.0\n\n## Major Features and Improvements\n\n*   This is the last version that supports TensorFlow 1.15.x. TF 1.15.x support\n    will be removed in the next version. 
Please check the\n    [TF2 migration guide](https://www.tensorflow.org/guide/migrate) to migrate\n    to TF2.\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorflow>=2.10.0,<3`.\n*   Depends on `tensorflow-model-analysis>=0.42,<0.43`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.41.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorflow>=2.9.0,<3`.\n*   Depends on `tensorflow-model-analysis>=0.41,<0.42`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.40.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorflow>=2.8.0,<3`.\n*   Depends on `tensorflow-model-analysis>=0.40,<0.41`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.39.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   Depends on `Werkzeug<2`.\n*   Depends on `tensorflow>=2.8.0,<3`.\n*   Depends on `tensorboard>=2.8.0,<3`.\n*   Depends on `tensorflow-model-analysis>=0.38,<0.39`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.38.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorflow>=2.8.0,<3`.\n*   Depends on `tensorboard>=2.8.0,<3`.\n*   Depends on `tensorflow-model-analysis>=0.38,<0.39`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.37.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   N/A\n\n## Breaking Changes\n\n*   Depends on `tensorflow-model-analysis>=0.37,<0.38`.\n\n## Deprecations\n\n*   N/A\n\n# Version 0.36.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorflow>=2.7.0,<3`.\n*   Depends on `tensorboard>=2.7.0,<3`.\n*   Depends on `tensorflow-model-analysis>=0.36,<0.37`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   
N/A\n\n# Version 0.35.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   N/A\n\n## Breaking Changes\n\n*   Depends on `tensorflow-model-analysis>=0.35,<0.36`.\n\n## Deprecations\n\n*   Deprecating python3.6 support.\n\n# Version 0.34.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorboard>=2.5.0,<3`.\n*   Depends on `tensorflow>=2.6.0,<3`.\n*   Depends on `tensorflow-model-analysis>=0.34,<0.35`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.33.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorboard>=2.5.0,<3`.\n*   Depends on `tensorflow>=2.5.0,<3`.\n*   Depends on `protobuf>=3.13,<4`.\n*   Depends on `tensorflow-model-analysis>=0.33,<0.34`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.30.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorboard>=2.4.0,!=2.5.*,<3`.\n*   Depends on `tensorflow>=2.4.0,!=2.5.*,<3`.\n*   Depends on `tensorflow-model-analysis>=0.30,<0.31`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.29.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorflow-model-analysis>=0.29,<0.30`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.28.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug Fixes and Other Changes\n\n*   Depends on `tensorflow-model-analysis>=0.28,<0.29`.\n\n## Breaking Changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.27.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug fixes and other changes\n\n*   Depends on `tensorboard>=2.4.0,<3`.\n*   Depends on `tensorflow>=2.4.0,<3`.\n*   Depends on `tensorflow-model-analysis>=0.27,<0.28`.\n\n## Breaking changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# 
Version 0.26.0\n\n## Major Features and Improvements\n\n*   N/A\n\n## Bug fixes and other changes\n\n*   Depends on `tensorboard>=2.3.0,!=2.4.*,<3`.\n*   Depends on `tensorflow>=2.3.0,!=2.4.*,<3`.\n*   Depends on `tensorflow-model-analysis>=0.26,<0.27`.\n\n## Breaking changes\n\n*   N/A\n\n## Deprecations\n\n*   N/A\n\n# Version 0.25.0\n\n## Major Features and Improvements\n\n*   From this release Tensorboard Plugin will also be hosting nightly packages\n    on https://pypi-nightly.tensorflow.org. To install the nightly package use\n    the following command:\n\n    ```\n    pip install --extra-index-url https://pypi-nightly.tensorflow.org/simple tensorboard-plugin-fairness-indicators\n    ```\n\n    Note: These nightly packages are unstable and breakages are likely to\n    happen. The fix could often take a week or more depending on the complexity\n    involved for the wheels to be available on the PyPI cloud service. You can\n    always use the stable version of Tensorboard Plugin available on PyPI by\n    running the command `pip install tensorboard-plugin-fairness-indicators` .\n\n## Bug fixes and other changes\n\n*   Adding support for model comparison using dynamic URL in TensorBoard plugin.\n*   Depends on `tensorflow-model-analysis>=0.25,<0.26`.\n\n## Breaking changes\n\n* N/A\n\n## Deprecations\n\n* N/A\n\n# Version 0.24.0\n\n## Major Features and Improvements\n\n* N/A\n\n## Bug fixes and other changes\n\n*   Fix in the error message while rendering evaluation results in\n    TensorBoard plugin from evaluation output path provided in the URL.\n*   Depends on `tensorflow-model-analysis>=0.24,<0.25`.\n\n## Breaking changes\n\n* N/A\n\n## Deprecations\n\n*   Deprecating Py3.5 support.\n\n# Version 0.23.0\n\n## Major Features and Improvements\n\n* N/A\n\n## Bug fixes and other changes\n\n*   Depends on `tensorboard>=2.3.0,<3`.\n*   Depends on `tensorflow>=2.3.0,<3`.\n*   Depends on `tensorflow-model-analysis>=0.23,<0.24`.\n*   Adding model comparison support 
in TensorBoard Plugin.\n\n## Breaking changes\n\n* N/A\n\n## Deprecations\n\n*   Deprecating Py2 support.\n*   Note: We plan to drop py3.5 support in the next release.\n"
  },
  {
    "path": "tensorboard_plugin/tensorboard_plugin_fairness_indicators/__init__.py",
    "content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n"
  },
  {
    "path": "tensorboard_plugin/tensorboard_plugin_fairness_indicators/demo.py",
    "content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Fairness Indicators Plugin Demo.\"\"\"\n\nimport tensorflow.compat.v1 as tf\nimport tensorflow.compat.v2 as tf2\nfrom absl import app, flags\n\nfrom tensorboard_plugin_fairness_indicators import summary_v2\n\n# Enable eager execution via the TF1 compat API, then use the TF2 API surface\n# for the rest of this module.\ntf.enable_eager_execution()\ntf = tf2\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_string(\n    \"eval_result_output_dir\", \"\", \"Log dir containing evaluation results.\"\n)\n\nflags.DEFINE_string(\"logdir\", \"\", \"Log dir where demo logs will be written.\")\n\n\ndef main(unused_argv):\n    writer = tf.summary.create_file_writer(FLAGS.logdir)\n\n    with writer.as_default():\n        summary_v2.FairnessIndicators(FLAGS.eval_result_output_dir, step=1)\n    writer.close()\n\n\nif __name__ == \"__main__\":\n    app.run(main)\n"
  },
  {
    "path": "tensorboard_plugin/tensorboard_plugin_fairness_indicators/metadata.py",
    "content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Plugin-specific global metadata.\"\"\"\n\nfrom tensorboard.compat.proto import summary_pb2\n\nPLUGIN_NAME = \"fairness_indicators\"\n\n\ndef CreateSummaryMetadata(description=None):\n    return summary_pb2.SummaryMetadata(\n        summary_description=description,\n        plugin_data=summary_pb2.SummaryMetadata.PluginData(plugin_name=PLUGIN_NAME),\n    )\n"
  },
  {
    "path": "tensorboard_plugin/tensorboard_plugin_fairness_indicators/metadata_test.py",
    "content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Tests for util function to create plugin metadata.\"\"\"\n\nimport tensorflow.compat.v1 as tf\n\nfrom tensorboard_plugin_fairness_indicators import metadata\n\n\nclass MetadataTest(tf.test.TestCase):\n    def testCreateSummaryMetadata(self):\n        summary_metadata = metadata.CreateSummaryMetadata(\"description\")\n        self.assertEqual(metadata.PLUGIN_NAME, summary_metadata.plugin_data.plugin_name)\n        self.assertEqual(\"description\", summary_metadata.summary_description)\n\n    def testCreateSummaryMetadata_withoutDescription(self):\n        summary_metadata = metadata.CreateSummaryMetadata()\n        self.assertEqual(metadata.PLUGIN_NAME, summary_metadata.plugin_data.plugin_name)\n"
  },
  {
    "path": "tensorboard_plugin/tensorboard_plugin_fairness_indicators/plugin.py",
    "content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"TensorBoard Fairnss Indicators plugin.\"\"\"\n\nimport os\nfrom typing import Any, Union\n\nimport six\nimport tensorflow as tf\nimport tensorflow_model_analysis as tfma\nfrom absl import logging\nfrom google.protobuf import json_format\nfrom tensorboard.backend import http_util\nfrom tensorboard.plugins import base_plugin\nfrom werkzeug import wrappers\n\nfrom tensorboard_plugin_fairness_indicators import metadata\n\n_TEMPLATE_LOCATION = os.path.normpath(\n    os.path.join(\n        __file__, \"../../\" \"tensorflow_model_analysis/static/vulcanized_tfma.js\"\n    )\n)\n\n\ndef stringify_slice_key_value(\n    slice_key: tfma.slicer.slicer_lib.SliceKeyType,\n) -> str:\n    \"\"\"Stringifies a slice key value.\n\n    The string representation of a SingletonSliceKeyType is \"feature:value\". This\n    function returns value.\n\n    When\n    multiple columns / features are specified, the string representation of a\n    SliceKeyType's value is \"v1_X_v2_X_...\" where v1, v2, ... are values. For\n    example,\n    ('gender, 'f'), ('age', 5) becomes f_X_5. If no columns / feature\n    specified, return \"Overall\".\n\n    Note that we do not perform special escaping for slice values that contain\n    '_X_'. 
This stringified representation is meant to be human-readable rather\n    than a reversible encoding.\n\n    The columns will be in the same order as in SliceKeyType. If they are\n    generated using SingleSliceSpec.generate_slices, they will be in sorted order,\n    ascending.\n\n    Technically float values are not supported, but we don't check for them here.\n\n    Args:\n    ----\n      slice_key: Slice key to stringify. The constituent SingletonSliceKeyTypes\n        should be sorted in ascending order.\n\n    Returns:\n    -------\n      String representation of the slice key's value.\n    \"\"\"\n    if not slice_key:\n        return \"Overall\"\n\n    # Since this is meant to be a human-readable string, we assume that the\n    # feature values are valid UTF-8 strings (might not be true in cases where\n    # people store serialised protos in the features for instance).\n    # We need to call as_str_any to convert non-string (e.g. integer) values to\n    # string first before converting to text.\n    # We format with an f-string to avoid encoding a unicode character with the\n    # ascii codec.\n    values = [\n        f\"{tf.compat.as_text(tf.compat.as_str_any(value))}\" for _, value in slice_key\n    ]\n    return \"_X_\".join(values)\n\n\ndef _add_cross_slice_key_data(\n    slice_key: tfma.slicer.slicer_lib.CrossSliceKeyType,\n    metrics: tfma.view.view_types.MetricsByTextKey,\n    data: list[Any],\n):\n    \"\"\"Adds data for cross slice key.\n\n    Baseline and comparison slice keys are joined by '__XX__'.\n\n    Args:\n    ----\n      slice_key: Cross slice key.\n      metrics: Metrics data for the cross slice key.\n      data: List where UI data is to be appended.\n    \"\"\"\n    baseline_key = slice_key[0]\n    comparison_key = slice_key[1]\n    stringify_slice_value = (\n        stringify_slice_key_value(baseline_key)\n        + \"__XX__\"\n        + stringify_slice_key_value(comparison_key)\n    )\n    stringify_slice = (\n        
tfma.slicer.slicer_lib.stringify_slice_key(baseline_key)\n        + \"__XX__\"\n        + tfma.slicer.slicer_lib.stringify_slice_key(comparison_key)\n    )\n    data.append(\n        {\n            \"sliceValue\": stringify_slice_value,\n            \"slice\": stringify_slice,\n            \"metrics\": metrics,\n        }\n    )\n\n\ndef convert_slicing_metrics_to_ui_input(\n    slicing_metrics: list[\n        tuple[\n            tfma.slicer.slicer_lib.SliceKeyOrCrossSliceKeyType,\n            tfma.view.view_types.MetricsByOutputName,\n        ]\n    ],\n    slicing_column: Union[str, None] = None,\n    slicing_spec: Union[tfma.slicer.slicer_lib.SingleSliceSpec, None] = None,\n    output_name: str = \"\",\n    multi_class_key: str = \"\",\n) -> Union[list[dict[str, Any]], None]:\n    \"\"\"Converts slicing metrics to the input format of the Fairness Indicators UI.\n\n    Args:\n    ----\n      slicing_metrics: tfma.EvalResult.slicing_metrics.\n      slicing_column: The slicing column to filter results. If both\n        slicing_column and slicing_spec are None, show all eval results.\n      slicing_spec: The slicing spec to filter results. 
If both slicing_column and\n        slicing_spec are None, show all eval results.\n      output_name: The output name associated with metric (for multi-output\n        models).\n      multi_class_key: The multi-class key associated with metric (for multi-class\n        models).\n\n    Returns:\n    -------\n      A list of dicts for each slice, where each dict contains keys 'sliceValue',\n      'slice', and 'metrics'.\n\n    Raises:\n    ------\n      ValueError if no related eval result found or both slicing_column and\n      slicing_spec are not None.\n    \"\"\"\n    if slicing_column and slicing_spec:\n        raise ValueError(\n            'Only one of the \"slicing_column\" and \"slicing_spec\" parameters '\n            \"can be set.\"\n        )\n    if slicing_column:\n        slicing_spec = tfma.slicer.slicer_lib.SingleSliceSpec(columns=[slicing_column])\n\n    data = []\n    for slice_key, metric_value in slicing_metrics:\n        if (\n            metric_value is not None\n            and output_name in metric_value\n            and multi_class_key in metric_value[output_name]\n        ):\n            metrics = metric_value[output_name][multi_class_key]\n            # To add evaluation data for cross slice comparison.\n            if tfma.slicer.slicer_lib.is_cross_slice_key(slice_key):\n                _add_cross_slice_key_data(slice_key, metrics, data)\n            # To add evaluation data for regular slices.\n            elif (\n                slicing_spec is None\n                or not slice_key\n                or slicing_spec.is_slice_applicable(slice_key)\n            ):\n                data.append(\n                    {\n                        \"sliceValue\": stringify_slice_key_value(slice_key),\n                        \"slice\": tfma.slicer.slicer_lib.stringify_slice_key(slice_key),\n                        \"metrics\": metrics,\n                    }\n                )\n    if not data:\n        raise ValueError(\n            'No eval 
result found for output_name:\"%s\" and '\n            'multi_class_key:\"%s\" and slicing_column:\"%s\" and slicing_spec:\"%s\".'\n            % (output_name, multi_class_key, slicing_column, slicing_spec)\n        )\n    return data\n\n\nclass FairnessIndicatorsPlugin(base_plugin.TBPlugin):\n    \"\"\"A plugin to visualize Fairness Indicators.\"\"\"\n\n    plugin_name = metadata.PLUGIN_NAME\n\n    def __init__(self, context):\n        \"\"\"Instantiates plugin via TensorBoard core.\n\n        Args:\n        ----\n          context: A base_plugin.TBContext instance. A magic container that\n            TensorBoard uses to make objects available to the plugin.\n        \"\"\"\n        self._multiplexer = context.multiplexer\n\n    def get_plugin_apps(self):\n        \"\"\"Gets all routes offered by the plugin.\n\n        This method is called by TensorBoard when retrieving all the\n        routes offered by the plugin.\n\n        Returns\n        -------\n          A dictionary mapping URL path to route that handles it.\n        \"\"\"\n        return {\n            \"/get_evaluation_result\": self._get_evaluation_result,\n            \"/get_evaluation_result_from_remote_path\": self._get_evaluation_result_from_remote_path,\n            \"/index.js\": self._serve_js,\n            \"/vulcanized_tfma.js\": self._serve_vulcanized_js,\n        }\n\n    def frontend_metadata(self):\n        return base_plugin.FrontendMetadata(\n            es_module_path=\"/index.js\",\n            disable_reload=False,\n            tab_name=\"Fairness Indicators\",\n            remove_dom=False,\n            element_name=None,\n        )\n\n    def is_active(self):\n        \"\"\"Determines whether this plugin is active.\n\n        This plugin is only active if TensorBoard sampled any summaries\n        relevant to the plugin.\n\n        Returns\n        -------\n          Whether this plugin is active.\n        \"\"\"\n        return bool(\n            
self._multiplexer.PluginRunToTagToContent(\n                FairnessIndicatorsPlugin.plugin_name\n            )\n        )\n\n    # pytype: disable=wrong-arg-types\n    @wrappers.Request.application\n    def _serve_js(self, request):\n        filepath = os.path.join(os.path.dirname(__file__), \"static\", \"index.js\")\n        with open(filepath) as infile:\n            contents = infile.read()\n        return http_util.Respond(\n            request, contents, content_type=\"application/javascript\"\n        )\n\n    @wrappers.Request.application\n    def _serve_vulcanized_js(self, request):\n        with open(_TEMPLATE_LOCATION) as infile:\n            contents = infile.read()\n        return http_util.Respond(\n            request, contents, content_type=\"application/javascript\"\n        )\n\n    @wrappers.Request.application\n    def _get_evaluation_result(self, request):\n        run = request.args.get(\"run\")\n        try:\n            run = six.ensure_text(run)\n        except (UnicodeDecodeError, AttributeError):\n            pass\n\n        data = []\n        try:\n            eval_result_output_dir = six.ensure_text(\n                self._multiplexer.Tensors(run, FairnessIndicatorsPlugin.plugin_name)[\n                    0\n                ].tensor_proto.string_val[0]\n            )\n            eval_result = tfma.load_eval_result(output_path=eval_result_output_dir)\n            # TODO(b/141283811): Allow users to choose different model output names\n            # and class keys in case of multi-output and multi-class model.\n            data = convert_slicing_metrics_to_ui_input(eval_result.slicing_metrics)\n        except (KeyError, json_format.ParseError) as error:\n            logging.info(\"Error while fetching evaluation data, %s\", error)\n        return http_util.Respond(request, data, content_type=\"application/json\")\n\n    def _get_output_file_format(self, evaluation_output_path):\n        file_format = 
os.path.splitext(evaluation_output_path)[1]\n        if file_format:\n            return file_format[1:]\n\n        return \"\"\n\n    @wrappers.Request.application\n    def _get_evaluation_result_from_remote_path(self, request):\n        evaluation_output_path = request.args.get(\"evaluation_output_path\")\n        try:\n            evaluation_output_path = six.ensure_text(evaluation_output_path)\n        except (UnicodeDecodeError, AttributeError):\n            pass\n        try:\n            eval_result = tfma.load_eval_result(\n                os.path.dirname(evaluation_output_path),\n                output_file_format=self._get_output_file_format(evaluation_output_path),\n            )\n            data = convert_slicing_metrics_to_ui_input(eval_result.slicing_metrics)\n        except (KeyError, json_format.ParseError) as error:\n            logging.info(\"Error while fetching evaluation data, %s\", error)\n            data = []\n        return http_util.Respond(request, data, content_type=\"application/json\")\n\n    # pytype: enable=wrong-arg-types\n"
  },
  {
    "path": "tensorboard_plugin/tensorboard_plugin_fairness_indicators/plugin_test.py",
    "content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Tests the Tensorboard Fairness Indicators plugin.\"\"\"\n\nimport os\nimport shutil\nfrom collections import abc\nfrom unittest import mock\n\nimport pytest\nimport six\nimport tensorflow.compat.v1 as tf\nimport tensorflow.compat.v2 as tf2\nimport tensorflow_model_analysis as tfma\nfrom google.protobuf import text_format\nfrom tensorboard.backend import application\nfrom tensorboard.backend.event_processing import (\n    plugin_event_multiplexer as event_multiplexer,\n)\nfrom tensorboard.plugins import base_plugin\nfrom tensorflow_model_analysis.utils import example_keras_model\nfrom werkzeug import test as werkzeug_test\nfrom werkzeug import wrappers\n\nfrom tensorboard_plugin_fairness_indicators import plugin, summary_v2\n\ntf.enable_eager_execution()\ntf = tf2\n\n\nclass PluginTest(tf.test.TestCase):\n    \"\"\"Tests for Fairness Indicators plugin server.\"\"\"\n\n    def setUp(self):\n        super(PluginTest, self).setUp()\n        # Log dir to save temp events into.\n        self._log_dir = self.get_temp_dir()\n        self._eval_result_output_dir = os.path.join(self.get_temp_dir(), \"eval_result\")\n        if not os.path.isdir(self._eval_result_output_dir):\n            os.mkdir(self._eval_result_output_dir)\n\n        writer = 
tf.summary.create_file_writer(self._log_dir)\n\n        with writer.as_default():\n            summary_v2.FairnessIndicators(self._eval_result_output_dir, step=1)\n        writer.close()\n\n        # Start a server that will receive requests.\n        self._multiplexer = event_multiplexer.EventMultiplexer(\n            {\n                \".\": self._log_dir,\n            }\n        )\n        self._context = base_plugin.TBContext(\n            logdir=self._log_dir, multiplexer=self._multiplexer\n        )\n        self._plugin = plugin.FairnessIndicatorsPlugin(self._context)\n        self._multiplexer.Reload()\n        wsgi_app = application.TensorBoardWSGI([self._plugin])\n        self._server = werkzeug_test.Client(wsgi_app, wrappers.Response)\n        self._routes = self._plugin.get_plugin_apps()\n\n    def tearDown(self):\n        super(PluginTest, self).tearDown()\n        shutil.rmtree(self._log_dir, ignore_errors=True)\n\n    def _export_keras_model(self, classifier):\n        temp_eval_export_dir = os.path.join(self.get_temp_dir(), \"eval_export_dir\")\n        classifier.compile(optimizer=tf.keras.optimizers.Adam(), loss=\"mse\")\n        tf.saved_model.save(classifier, temp_eval_export_dir)\n        return temp_eval_export_dir\n\n    def _write_tf_examples_to_tfrecords(self, examples):\n        data_location = os.path.join(self.get_temp_dir(), \"input_data.rio\")\n        with tf.io.TFRecordWriter(data_location) as writer:\n            for example in examples:\n                writer.write(example.SerializeToString())\n        return data_location\n\n    def _make_example(self, age, language, label):\n        example = tf.train.Example()\n        example.features.feature[\"age\"].float_list.value[:] = [age]\n        example.features.feature[\"language\"].bytes_list.value[:] = [\n            six.ensure_binary(language, \"utf8\")\n        ]\n        example.features.feature[\"label\"].float_list.value[:] = [label]\n        return example\n\n    def 
_make_eval_config(self):\n        return text_format.Parse(\n            \"\"\"\n        model_specs {\n          signature_name: \"serving_default\"\n          prediction_key: \"predictions\" # placeholder\n          label_key: \"label\" # placeholder\n        }\n        slicing_specs {}\n        metrics_specs {\n          metrics {\n            class_name: \"ExampleCount\"\n          }\n          metrics {\n            class_name: \"Accuracy\"\n          }\n        }\n  \"\"\",\n            tfma.EvalConfig(),\n        )\n\n    def testRoutes(self):\n        self.assertIsInstance(self._routes[\"/get_evaluation_result\"], abc.Callable)\n        self.assertIsInstance(\n            self._routes[\"/get_evaluation_result_from_remote_path\"], abc.Callable\n        )\n        self.assertIsInstance(self._routes[\"/index.js\"], abc.Callable)\n        self.assertIsInstance(self._routes[\"/vulcanized_tfma.js\"], abc.Callable)\n\n    @mock.patch.object(\n        event_multiplexer.EventMultiplexer,\n        \"PluginRunToTagToContent\",\n        return_value={\"bar\": {\"foo\": b\"\"}},\n    )\n    def testIsActive(self, get_random_stub):  # pylint: disable=unused-argument\n        self.assertTrue(self._plugin.is_active())\n\n    @mock.patch.object(\n        event_multiplexer.EventMultiplexer, \"PluginRunToTagToContent\", return_value={}\n    )\n    def testIsInactive(self, get_random_stub):  # pylint: disable=unused-argument\n        self.assertFalse(self._plugin.is_active())\n\n    def testIndexJsRoute(self):\n        \"\"\"Tests that the index.js route serves the plugin's frontend script.\"\"\"\n        response = self._server.get(\"/data/plugin/fairness_indicators/index.js\")\n        self.assertEqual(200, response.status_code)\n\n    @pytest.mark.xfail(\n        reason=(\n            \"Failing on `master` as of `942b672457e07ac2ac27de0bcc45a4c80276785c`. 
\"\n            \"Please remove once fixed.\"\n        )\n    )\n    def testVulcanizedTemplateRoute(self):\n        \"\"\"Tests that the /tags route offers the correct run to tag mapping.\"\"\"\n        response = self._server.get(\n            \"/data/plugin/fairness_indicators/vulcanized_tfma.js\"\n        )\n        self.assertEqual(200, response.status_code)\n\n    def testGetEvalResultsRoute(self):\n        model_location = self._export_keras_model(\n            example_keras_model.get_example_classifier_model(\n                input_feature_key=\"language\"\n            )\n        )\n        examples = [\n            self._make_example(age=3.0, language=\"english\", label=1.0),\n            self._make_example(age=3.0, language=\"chinese\", label=0.0),\n            self._make_example(age=4.0, language=\"english\", label=1.0),\n            self._make_example(age=5.0, language=\"chinese\", label=1.0),\n            self._make_example(age=5.0, language=\"hindi\", label=1.0),\n        ]\n        eval_config = self._make_eval_config()\n        data_location = self._write_tf_examples_to_tfrecords(examples)\n        _ = tfma.run_model_analysis(\n            eval_shared_model=tfma.default_eval_shared_model(\n                eval_saved_model_path=model_location, eval_config=eval_config\n            ),\n            eval_config=eval_config,\n            data_location=data_location,\n            output_path=self._eval_result_output_dir,\n        )\n\n        response = self._server.get(\n            \"/data/plugin/fairness_indicators/get_evaluation_result?run=.\"\n        )\n        self.assertEqual(200, response.status_code)\n\n    def testGetEvalResultsFromURLRoute(self):\n        model_location = self._export_keras_model(\n            example_keras_model.get_example_classifier_model(\n                input_feature_key=\"language\"\n            )\n        )\n        examples = [\n            self._make_example(age=3.0, language=\"english\", label=1.0),\n            
self._make_example(age=3.0, language=\"chinese\", label=0.0),\n            self._make_example(age=4.0, language=\"english\", label=1.0),\n            self._make_example(age=5.0, language=\"chinese\", label=1.0),\n            self._make_example(age=5.0, language=\"hindi\", label=1.0),\n        ]\n        eval_config = self._make_eval_config()\n        data_location = self._write_tf_examples_to_tfrecords(examples)\n        _ = tfma.run_model_analysis(\n            eval_shared_model=tfma.default_eval_shared_model(\n                eval_saved_model_path=model_location, eval_config=eval_config\n            ),\n            eval_config=eval_config,\n            data_location=data_location,\n            output_path=self._eval_result_output_dir,\n        )\n\n        response = self._server.get(\n            \"/data/plugin/fairness_indicators/\"\n            + \"get_evaluation_result_from_remote_path?evaluation_output_path=\"\n            + os.path.join(self._eval_result_output_dir, tfma.METRICS_KEY)\n        )\n        self.assertEqual(200, response.status_code)\n\n    def testGetOutputFileFormat(self):\n        self.assertEqual(\"\", self._plugin._get_output_file_format(\"abc_path\"))\n        self.assertEqual(\n            \"tfrecord\", self._plugin._get_output_file_format(\"abc_path.tfrecord\")\n        )\n"
  },
  {
    "path": "tensorboard_plugin/tensorboard_plugin_fairness_indicators/static/index.js",
    "content": "\n// Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n// ==============================================================================\n\n/** Function render Fairness Indicators UI. */\nexport async function render() {\n  const script = document.createElement('script');\n  script.src = \"./vulcanized_tfma.js\";\n  document.body.appendChild(script);\n\n  const container = document.createElement('fairness-tensorboard-container');\n  document.body.appendChild(container);\n}\n"
  },
  {
    "path": "tensorboard_plugin/tensorboard_plugin_fairness_indicators/summary_v2.py",
    "content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Summaries for Fairness Indicators plugin.\"\"\"\n\nfrom tensorboard.compat import tf2 as tf\n\nfrom tensorboard_plugin_fairness_indicators import metadata\n\n\ndef FairnessIndicators(eval_result_output_dir, step=None, description=None):\n    \"\"\"Write a Fairness Indicators summary.\n\n    Arguments:\n    ---------\n      eval_result_output_dir: Directory output created by\n        tfma.model_eval_lib.ExtractEvaluateAndWriteResults API, which contains\n        'metrics' file having MetricsForSlice results.\n      step: Explicit `int64`-castable monotonic step value for this summary. If\n        omitted, this defaults to `tf.summary.experimental.get_step()`, which must\n        not be None.\n      description: Optional long-form description for this summary, as a constant\n        `str`. Markdown is supported. 
Defaults to empty.\n\n    Returns:\n    -------\n      True on success, or False if no summary was written because no default\n      summary writer was available.\n\n    Raises:\n    ------\n      ValueError: if a default writer exists, but no step was provided and\n        `tf.summary.experimental.get_step()` is None.\n    \"\"\"\n    with tf.summary.experimental.summary_scope(metadata.PLUGIN_NAME):\n        return tf.summary.write(\n            tag=metadata.PLUGIN_NAME,\n            tensor=tf.constant(eval_result_output_dir),\n            step=step,\n            metadata=metadata.CreateSummaryMetadata(description),\n        )\n"
  },
  {
    "path": "tensorboard_plugin/tensorboard_plugin_fairness_indicators/summary_v2_test.py",
    "content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Tests for Fairness Indicators summary.\"\"\"\n\nimport glob\nimport os\n\nimport six\nimport tensorflow.compat.v1 as tf\nfrom tensorboard.compat import tf2\n\nfrom tensorboard_plugin_fairness_indicators import metadata, summary_v2\n\ntry:\n    tf2.__version__  # Force lazy import to resolve\nexcept ImportError:\n    tf2 = None\n\ntry:\n    tf.enable_eager_execution()\nexcept AttributeError:\n    # TF 2.0 doesn't have this symbol because eager is the default.\n    pass\n\n\nclass SummaryV2Test(tf.test.TestCase):\n    def _write_summary(self, eval_result_output_dir):\n        writer = tf2.summary.create_file_writer(self.get_temp_dir())\n        with writer.as_default():\n            summary_v2.FairnessIndicators(eval_result_output_dir, step=1)\n        writer.close()\n\n    def _get_event(self):\n        event_files = sorted(glob.glob(os.path.join(self.get_temp_dir(), \"*\")))\n        self.assertEqual(len(event_files), 1)\n        events = list(tf.train.summary_iterator(event_files[0]))\n        # Expect a boilerplate event for the file_version, then the summary one.\n        self.assertEqual(len(events), 2)\n        return events[1]\n\n    def testSummary(self):\n        self._write_summary(\"output_dir\")\n        event = self._get_event()\n\n       
 self.assertEqual(1, event.step)\n\n        summary_value = event.summary.value[0]\n        self.assertEqual(metadata.PLUGIN_NAME, summary_value.tag)\n        self.assertEqual(\n            \"output_dir\", six.ensure_text(summary_value.tensor.string_val[0], \"utf-8\")\n        )\n        self.assertEqual(\n            metadata.PLUGIN_NAME, summary_value.metadata.plugin_data.plugin_name\n        )\n"
  },
  {
    "path": "tensorboard_plugin/tensorboard_plugin_fairness_indicators/version.py",
    "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Contains the version string of Fairness Indicators Tensorboard Plugin.\"\"\"\n\n# Note that setup.py uses this version.\n__version__ = \"0.49.0.dev\"\n"
  }
]