Repository: tensorflow/fairness-indicators
Branch: master
Commit: 6c970e0ec6c5
Files: 64
Total size: 442.6 KB
Directory structure:
gitextract_8nbht4qq/
├── .github/
│ ├── ISSUE_TEMPLATE/
│ │ ├── 00-bug-issue.md
│ │ ├── 10-build-installation-issue.md
│ │ ├── 20-documentation-issue.md
│ │ ├── 30-feature-request.md
│ │ ├── 40-performance-issue.md
│ │ └── 50-other-issues.md
│ ├── actions/
│ │ └── setup-env/
│ │ └── action.yml
│ └── workflows/
│ ├── build.yml
│ ├── ci-lint.yml
│ ├── docs.yml
│ └── test.yml
├── .pre-commit-config.yaml
├── CONTRIBUTING.md
├── LICENSE
├── README.md
├── RELEASE.md
├── docs/
│ ├── __init__.py
│ ├── guide/
│ │ ├── _index.yaml
│ │ ├── _toc.yaml
│ │ └── guidance.md
│ ├── index.md
│ ├── javascripts/
│ │ └── mathjax.js
│ ├── stylesheets/
│ │ └── extra.css
│ └── tutorials/
│ ├── Facessd_Fairness_Indicators_Example_Colab.ipynb
│ ├── Fairness_Indicators_Example_Colab.ipynb
│ ├── Fairness_Indicators_Pandas_Case_Study.ipynb
│ ├── Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb
│ ├── Fairness_Indicators_TFCO_Wiki_Case_Study.ipynb
│ ├── Fairness_Indicators_TensorBoard_Plugin_Example_Colab.ipynb
│ ├── Fairness_Indicators_on_TF_Hub_Text_Embeddings.ipynb
│ ├── README.md
│ ├── _Deprecated_Fairness_Indicators_Lineage_Case_Study.ipynb
│ └── _toc.yaml
├── fairness_indicators/
│ ├── __init__.py
│ ├── example_model.py
│ ├── example_model_test.py
│ ├── fairness_indicators_metrics.py
│ ├── remediation/
│ │ ├── __init__.py
│ │ ├── weight_utils.py
│ │ └── weight_utils_test.py
│ ├── test_cases/
│ │ └── dlvm/
│ │ ├── fairness_indicators_dlvm_test_case.ipynb
│ │ └── fi_test_installed.sh
│ ├── tutorial_utils/
│ │ ├── __init__.py
│ │ ├── util.py
│ │ └── util_test.py
│ └── version.py
├── mkdocs.yml
├── pyproject.toml
├── requirements-docs.txt
├── setup.py
└── tensorboard_plugin/
├── README.md
├── pytest.ini
├── setup.py
└── tensorboard_plugin_fairness_indicators/
├── RELEASE.md
├── __init__.py
├── demo.py
├── metadata.py
├── metadata_test.py
├── plugin.py
├── plugin_test.py
├── static/
│ └── index.js
├── summary_v2.py
├── summary_v2_test.py
└── version.py
================================================
FILE CONTENTS
================================================
================================================
FILE: .github/ISSUE_TEMPLATE/00-bug-issue.md
================================================
---
name: Bug Issue
about: Use this template for reporting a bug
labels: 'type:bug'
---
**System information**
- Have I written custom code (as opposed to using stock example code provided):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Fairness Indicators version:
- TensorFlow version:
- Python version:
**Describe the current behavior**
**Describe the expected behavior**
**Standalone code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate
the problem. If possible, please share a link to Colab/Jupyter/any notebook.
**Other info / logs** Include any logs or source code that would be helpful to
diagnose the problem. If including tracebacks, please include the full
traceback. Large logs and files should be attached.
================================================
FILE: .github/ISSUE_TEMPLATE/10-build-installation-issue.md
================================================
---
name: Build/Installation Issue
about: Use this template for build/installation issues
labels: 'type:build/install'
---
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Fairness Indicators version:
- Python version:
- Pip version:
**Describe the problem**
**Provide the exact sequence of commands / steps that you executed before running into the problem**
**Any other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
================================================
FILE: .github/ISSUE_TEMPLATE/20-documentation-issue.md
================================================
---
name: Documentation Issue
about: Use this template for documentation related issues
labels: 'type:docs'
---
The Fairness Indicators docs are open source! To get involved, read the
documentation contributor guide:
https://github.com/tensorflow/fairness-indicators/blob/master/CONTRIBUTING.md
## URL(s) with the issue:
Please provide a link to the documentation entry.
## Description of issue (what needs changing):
### Clear description
For example, why should someone use this method? How is it useful?
### Correct links
Is the link to the source code correct?
### Parameters defined
Are all parameters defined and formatted correctly?
### Returns defined
Are return values defined?
### Raises listed and defined
Are the errors defined?
### Usage example
Is there currently a usage example for this method?
### Request visuals, if applicable
Are there currently visuals? If not, will it clarify the content?
### Submit a pull request?
Are you planning to also submit a pull request to fix the issue?
================================================
FILE: .github/ISSUE_TEMPLATE/30-feature-request.md
================================================
---
name: Feature Request
about: Use this template for raising a feature request
labels: 'type:feature'
---
**Describe the feature and the current behavior/state.**
**Will this change the current API? How?**
**Who will benefit from this feature?**
**Are you willing to contribute it (Yes/No)?**
**Any other info.**
================================================
FILE: .github/ISSUE_TEMPLATE/40-performance-issue.md
================================================
---
name: Performance Issue
about: Use this template for reporting a performance issue
labels: 'type:performance'
---
**System information**
- Have I written custom code (as opposed to using stock example code provided):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Fairness Indicators version:
- TensorFlow version:
- Python version:
**Describe the current behavior**
**Describe the expected behavior**
**Standalone code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate
the problem. If possible, please share a link to Colab/Jupyter/any notebook.
**Other info / logs** Include any logs or source code that would be helpful to
diagnose the problem. If including tracebacks, please include the full
traceback. Large logs and files should be attached.
================================================
FILE: .github/ISSUE_TEMPLATE/50-other-issues.md
================================================
---
name: Other Issues
about: Use this template for any other non-support related issues
labels: 'type:others'
---
This template is for miscellaneous issues not covered by the other categories.
================================================
FILE: .github/actions/setup-env/action.yml
================================================
name: Set up environment
description: Set up environment and install package

inputs:
  python-version:
    default: "3.10"
    required: true
  package-root-dir:
    default: "./"
    required: true

runs:
  using: composite
  steps:
    - name: Set up Python ${{ inputs.python-version }}
      uses: actions/setup-python@v5
      with:
        python-version: ${{ inputs.python-version }}
        cache-dependency-path: |
          ${{ inputs.package-root-dir }}/setup.py
    - name: Install dependencies
      shell: bash
      run: |
        python -m pip install --upgrade pip
        pip install ${{ inputs.package-root-dir }}[test]
================================================
FILE: .github/workflows/build.yml
================================================
name: Build

on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.9", "3.10"]
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up python
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install python build dependencies
        run: |
          python -m pip install --upgrade pip build
      - name: Build wheels
        run: |
          python -m build --wheel --sdist
          mkdir wheelhouse
          mv dist/* wheelhouse/
      - name: List and check wheels
        run: |
          pip install twine 'pkginfo>=1.11.0'
          ${{ matrix.ls || 'ls -lh' }} wheelhouse/
          twine check wheelhouse/*
      - name: Upload wheels
        uses: actions/upload-artifact@v4
        with:
          name: wheels-${{ matrix.python-version }}
          path: ./wheelhouse/*

  upload_to_pypi:
    name: Upload to PyPI
    runs-on: ubuntu-latest
    if: (github.event_name == 'release' && startsWith(github.ref, 'refs/tags')) || (github.event_name == 'workflow_dispatch')
    needs: [build]
    environment:
      name: pypi
      url: https://pypi.org/p/fairness-indicators
    permissions:
      id-token: write
    steps:
      - name: Retrieve wheels
        uses: actions/download-artifact@v4.1.8
        with:
          merge-multiple: true
          path: wheels
      - name: List the build artifacts
        run: |
          ls -lAs wheels/
      - name: Upload to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1.12
        with:
          packages_dir: wheels/
          repository_url: https://upload.pypi.org/legacy/
          verify_metadata: false
          verbose: true
================================================
FILE: .github/workflows/ci-lint.yml
================================================
name: pre-commit

on:
  pull_request:
  push:
    branches: [master]

jobs:
  pre-commit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4.1.7
        with:
          # Ensure the full history is fetched.
          # This is required to run pre-commit on a specific set of commits.
          # TODO: Remove this when all the pre-commit issues are fixed.
          fetch-depth: 0
      - uses: actions/setup-python@v5.1.1
        with:
          python-version: 3.13
      - uses: pre-commit/action@v3.0.1
================================================
FILE: .github/workflows/docs.yml
================================================
name: Deploy docs

on:
  workflow_dispatch:
  push:
    branches:
      - 'master'
  pull_request:

permissions:
  contents: write

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repo
        uses: actions/checkout@v4
      - name: Configure Git Credentials
        run: |
          git config user.name github-actions[bot]
          git config user.email 41898282+github-actions[bot]@users.noreply.github.com
        if: (github.event_name != 'pull_request')
      - name: Set up Python 3.9
        uses: actions/setup-python@v5
        with:
          python-version: '3.9'
          cache: 'pip'
          cache-dependency-path: |
            setup.py
            requirements-docs.txt
      - name: Save time for cache for mkdocs
        run: echo "cache_id=$(date --utc '+%V')" >> $GITHUB_ENV
      - name: Caching
        uses: actions/cache@v4
        with:
          key: mkdocs-material-${{ env.cache_id }}
          path: .cache
          restore-keys: |
            mkdocs-material-
      - name: Install Dependencies
        run: pip install -r requirements-docs.txt
      - name: Deploy to GitHub Pages
        run: mkdocs gh-deploy --force
        if: (github.event_name != 'pull_request')
      - name: Build docs to check for errors
        run: mkdocs build
        if: (github.event_name == 'pull_request')
================================================
FILE: .github/workflows/test.yml
================================================
name: Tests

on:
  push:
    paths-ignore:
      - '**.md'
      - 'docs/**'
  pull_request:
    branches: [ master ]
    paths-ignore:
      - '**.md'
      - 'docs/**'
  workflow_dispatch:

jobs:
  tests:
    if: github.actor != 'copybara-service[bot]'
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.9', '3.10']
        package-root-dir: ['./', './tensorboard_plugin']
    steps:
      - name: Checkout repo
        uses: actions/checkout@v4
      - name: Set up environment
        uses: ./.github/actions/setup-env
        with:
          python-version: ${{ matrix.python-version }}
          package-root-dir: ${{ matrix.package-root-dir }}
      - name: Run tests
        shell: bash
        run: |
          cd ${{ matrix.package-root-dir }}
          pytest
================================================
FILE: .pre-commit-config.yaml
================================================
# pre-commit is a tool to perform a predefined set of tasks manually and/or
# automatically before git commits are made.
#
# Config reference: https://pre-commit.com/#pre-commit-configyaml---top-level
#
# Common tasks
#
# - Register git hooks: pre-commit install --install-hooks
# - Run on all files: pre-commit run --all-files
#
# These pre-commit hooks are run as CI.
#
# NOTE: if it can be avoided, add configs/args in pyproject.toml or below instead of creating a new `.config.file`.
# https://pre-commit.ci/#configuration
ci:
  autoupdate_schedule: monthly
  autofix_commit_msg: |
    [pre-commit.ci] Apply automatic pre-commit fixes

repos:
  # general
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: end-of-file-fixer
        exclude: '\.svg$'
      - id: trailing-whitespace
        exclude: '\.svg$'
      - id: check-json
      - id: check-yaml
        args: [--allow-multiple-documents, --unsafe]
      - id: check-toml
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.5.6
    hooks:
      - id: ruff
        args: ["--fix"]
      - id: ruff-format
================================================
FILE: CONTRIBUTING.md
================================================
# How to Contribute
We'd love to accept your patches and contributions to this project. There are
just a few small guidelines you need to follow.
## Contributor License Agreement
Contributions to this project must be accompanied by a Contributor License
Agreement. You (or your employer) retain the copyright to your contribution,
this simply gives us permission to use and redistribute your contributions as
part of the project. Head over to <https://cla.developers.google.com/> to see
your current agreements on file or to sign a new one.
You generally only need to submit a CLA once, so if you've already submitted one
(even if it was for a different project), you probably don't need to do it
again.
## Code reviews
All submissions, including submissions by project members, require review. We
use GitHub pull requests for this purpose. Consult
[GitHub Help](https://help.github.com/articles/about-pull-requests/) for more
information on using pull requests.
================================================
FILE: LICENSE
================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2017, The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
--------------------------------------------------------------------------------
MIT
The MIT License (MIT)
Copyright (c) 2014-2015, Jon Schlinkert.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
--------------------------------------------------------------------------------
BSD-3-Clause
Copyright (c) 2016, Daniel Wirtz All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of its author, nor the names of its contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
================================================
FILE: README.md
================================================
# Fairness Indicators

Fairness Indicators is designed to support teams in evaluating, improving, and comparing models for fairness concerns in partnership with the broader TensorFlow toolkit.
The tool is actively used internally by many of our products. We would love to partner with you to understand where Fairness Indicators is most useful and where added functionality would be valuable. Please reach out at tfx@tensorflow.org. You can provide feedback and feature requests [here](https://github.com/tensorflow/fairness-indicators/issues/new/choose).
## Key links
* [Introductory Video](https://www.youtube.com/watch?v=pHT-ImFXPQo)
* [Fairness Indicators Case Study](https://developers.google.com/machine-learning/practica/fairness-indicators?utm_source=github&utm_medium=github&utm_campaign=fi-practicum&utm_term=&utm_content=repo-body)
* [Fairness Indicators Example Colab](https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/docs/tutorials/Fairness_Indicators_Example_Colab.ipynb)
* [Pandas DataFrame to Fairness Indicators Case Study](https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/docs/tutorials/Fairness_Indicators_Pandas_Case_Study.ipynb)
* [Fairness Indicators: Thinking about Fairness Evaluation](https://github.com/tensorflow/fairness-indicators/blob/master/docs/guide/guidance.md)
## What is Fairness Indicators?
Fairness Indicators enables easy computation of commonly-identified fairness metrics for **binary** and **multiclass** classifiers.
Many existing tools for evaluating fairness concerns don’t work well on large-scale datasets and models. At Google, it is important for us to have tools that can work on billion-user systems. Fairness Indicators allows you to evaluate fairness metrics for use cases of any size.
In particular, Fairness Indicators includes the ability to:
* Evaluate the distribution of datasets
* Evaluate model performance, sliced across defined groups of users
* Feel confident about your results with confidence intervals and evals at multiple thresholds
* Dive deep into individual slices to explore root causes and opportunities for improvement
This [case study](https://developers.google.com/machine-learning/practica/fairness-indicators?utm_source=github&utm_medium=github&utm_campaign=fi-practicum&utm_term=&utm_content=repo-body), complete with [videos](https://www.youtube.com/watch?v=pHT-ImFXPQo) and programming exercises, demonstrates how Fairness Indicators can be used on one of your own products to evaluate fairness concerns over time.
[Watch the introductory video](http://www.youtube.com/watch?v=pHT-ImFXPQo)
## [Installation](https://pypi.org/project/fairness-indicators/)
`pip install fairness-indicators`
The pip package includes:
* [**TensorFlow Data Validation (TFDV)**](https://github.com/tensorflow/data-validation) - analyze the distribution of your dataset
* [**TensorFlow Model Analysis (TFMA)**](https://github.com/tensorflow/model-analysis) - analyze model performance
* **Fairness Indicators** - an addition to TFMA that adds fairness metrics and easy performance comparison across slices
* [**The What-If Tool (WIT)**](https://github.com/PAIR-code/what-if-tool) - an interactive visual interface designed to probe your models better
### Nightly Packages
Fairness Indicators also hosts nightly packages at
https://pypi-nightly.tensorflow.org on Google Cloud. To install the latest
nightly package, please use the following command:
```bash
pip install --extra-index-url https://pypi-nightly.tensorflow.org/simple fairness-indicators
```
This will install the nightly packages for the major dependencies of Fairness
Indicators, such as TensorFlow Data Validation (TFDV) and TensorFlow Model
Analysis (TFMA).
## How can I use Fairness Indicators?
TensorFlow models
* Access Fairness Indicators as part of the Evaluator component in TensorFlow Extended (TFX) \[[docs](https://www.tensorflow.org/tfx/guide/evaluator)]
* Access Fairness Indicators in TensorBoard when evaluating other real-time metrics \[[docs](https://github.com/tensorflow/tensorboard/blob/master/docs/fairness-indicators.md)]
Not using existing TensorFlow tools? No worries!
* Download the Fairness Indicators pip package, and use TensorFlow Model Analysis as a standalone tool \[[docs](https://www.tensorflow.org/tfx/guide/fairness_indicators)]
* Model Agnostic TFMA enables you to compute Fairness Indicators based on the output of any model \[[docs](https://www.tensorflow.org/tfx/guide/fairness_indicators)] (see the sketch below)
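As a rough sketch of the standalone workflow (not the only way to wire things up), the following configures a TFMA evaluation with Fairness Indicators enabled. The model path, data path, and the `label`/`gender` keys are hypothetical placeholders for your own setup:

```python
import tensorflow_model_analysis as tfma
from tensorflow_model_analysis.addons.fairness.view import widget_view

# Hypothetical keys and paths -- substitute your own.
eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='label')],
    metrics_specs=[
        tfma.MetricsSpec(metrics=[
            tfma.MetricConfig(class_name='ExampleCount'),
            tfma.MetricConfig(
                class_name='FairnessIndicators',
                config='{"thresholds": [0.25, 0.5, 0.75]}'),
        ])
    ],
    slicing_specs=[
        tfma.SlicingSpec(),                         # overall metrics
        tfma.SlicingSpec(feature_keys=['gender']),  # sliced by group
    ],
)

eval_result = tfma.run_model_analysis(
    eval_shared_model=tfma.default_eval_shared_model(
        eval_saved_model_path='/path/to/saved_model',
        eval_config=eval_config),
    eval_config=eval_config,
    data_location='/path/to/eval_data.tfrecord')

# Render the Fairness Indicators widget in a notebook.
widget_view.render_fairness_indicator(eval_result)
```

Evaluating at multiple thresholds, as configured above, is what lets the UI compare false positive and false negative rates across slices rather than a single operating point.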
## Examples
The [examples directory](https://github.com/tensorflow/fairness-indicators/tree/master/docs/tutorials) contains several examples:
* [Fairness_Indicators_Example_Colab.ipynb](https://github.com/tensorflow/fairness-indicators/blob/master/docs/tutorials/Fairness_Indicators_Example_Colab.ipynb) gives an overview of Fairness Indicators in [TensorFlow Model Analysis](https://www.tensorflow.org/tfx/guide/tfma) and how to use it with a real dataset. This notebook also goes over [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started) and the [What-If Tool](https://pair-code.github.io/what-if-tool/), two tools for analyzing TensorFlow models that are packaged with Fairness Indicators.
* [Fairness_Indicators_on_TF_Hub_Text_Embeddings.ipynb](https://github.com/tensorflow/fairness-indicators/blob/master/docs/tutorials/Fairness_Indicators_on_TF_Hub_Text_Embeddings.ipynb) demonstrates how to use Fairness Indicators to compare models trained on different [text embeddings](https://en.wikipedia.org/wiki/Word_embedding). This notebook uses text embeddings from [TensorFlow Hub](https://www.tensorflow.org/hub), TensorFlow's library to publish, discover, and reuse model components.
* [Fairness_Indicators_TensorBoard_Plugin_Example_Colab.ipynb](https://github.com/tensorflow/fairness-indicators/blob/master/docs/tutorials/Fairness_Indicators_TensorBoard_Plugin_Example_Colab.ipynb) demonstrates how to visualize Fairness Indicators in TensorBoard.
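For the TensorBoard route, a minimal sketch of the documented plugin usage looks like the following; the log directory and the TFMA evaluation-result directory are placeholders:

```python
import tensorflow as tf
from tensorboard_plugin_fairness_indicators import summary_v2

# Write a Fairness Indicators summary pointing at a directory that
# contains a TFMA EvalResult; both paths here are hypothetical.
writer = tf.summary.create_file_writer('./logs')
with writer.as_default():
  summary_v2.FairnessIndicators('./eval_result_output', step=1)
writer.close()

# Then launch `tensorboard --logdir=./logs` and open the
# Fairness Indicators dashboard.
```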
## More questions?
For more information on how to think about fairness evaluation in the context of your use case, see [Fairness Indicators: Thinking about Fairness Evaluation](https://github.com/tensorflow/fairness-indicators/blob/master/docs/guide/guidance.md).
If you have found a bug in Fairness Indicators, please file a [GitHub issue](https://github.com/tensorflow/fairness-indicators/issues/new/choose) with as much supporting information as you can provide.
## Compatible versions
The following table shows the package versions that are
compatible with each other. This is determined by our testing framework, but
other *untested* combinations may also work.
|fairness-indicators | tensorflow | tensorflow-data-validation | tensorflow-model-analysis |
|-------------------------------------------------------------------------------------------|--------------------|----------------------------|---------------------------|
|[GitHub master](https://github.com/tensorflow/fairness-indicators/blob/master/RELEASE.md) | nightly (1.x/2.x) | 1.17.0 | 0.48.0 |
|[v0.48.0](https://github.com/tensorflow/fairness-indicators/blob/v0.48.0/RELEASE.md) | 2.17 | 1.17.0 | 0.48.0 |
|[v0.47.0](https://github.com/tensorflow/fairness-indicators/blob/v0.47.0/RELEASE.md) | 2.16 | 1.16.1 | 0.47.1 |
|[v0.46.0](https://github.com/tensorflow/fairness-indicators/blob/v0.46.0/RELEASE.md)   | 2.15               | 1.15.1                     | 0.46.0                    |
|[v0.44.0](https://github.com/tensorflow/fairness-indicators/blob/v0.44.0/RELEASE.md) | 2.12 | 1.13.0 | 0.44.0 |
|[v0.43.0](https://github.com/tensorflow/fairness-indicators/blob/v0.43.0/RELEASE.md) | 2.11 | 1.12.0 | 0.43.0 |
|[v0.42.0](https://github.com/tensorflow/fairness-indicators/blob/v0.42.0/RELEASE.md) | 1.15.5 / 2.10 | 1.11.0 | 0.42.0 |
|[v0.41.0](https://github.com/tensorflow/fairness-indicators/blob/v0.41.0/RELEASE.md) | 1.15.5 / 2.9 | 1.10.0 | 0.41.0 |
|[v0.40.0](https://github.com/tensorflow/fairness-indicators/blob/v0.40.0/RELEASE.md) | 1.15.5 / 2.9 | 1.9.0 | 0.40.0 |
|[v0.39.0](https://github.com/tensorflow/fairness-indicators/blob/v0.39.0/RELEASE.md) | 1.15.5 / 2.8 | 1.8.0 | 0.39.0 |
|[v0.38.0](https://github.com/tensorflow/fairness-indicators/blob/v0.38.0/RELEASE.md) | 1.15.5 / 2.8 | 1.7.0 | 0.38.0 |
|[v0.37.0](https://github.com/tensorflow/fairness-indicators/blob/v0.37.0/RELEASE.md) | 1.15.5 / 2.7 | 1.6.0 | 0.37.0 |
|[v0.36.0](https://github.com/tensorflow/fairness-indicators/blob/v0.36.0/RELEASE.md) | 1.15.2 / 2.7 | 1.5.0 | 0.36.0 |
|[v0.35.0](https://github.com/tensorflow/fairness-indicators/blob/v0.35.0/RELEASE.md) | 1.15.2 / 2.6 | 1.4.0 | 0.35.0 |
|[v0.34.0](https://github.com/tensorflow/fairness-indicators/blob/v0.34.0/RELEASE.md) | 1.15.2 / 2.6 | 1.3.0 | 0.34.0 |
|[v0.33.0](https://github.com/tensorflow/fairness-indicators/blob/v0.33.0/RELEASE.md) | 1.15.2 / 2.5 | 1.2.0 | 0.33.0 |
|[v0.30.0](https://github.com/tensorflow/fairness-indicators/blob/v0.30.0/RELEASE.md) | 1.15.2 / 2.4 | 0.30.0 | 0.30.0 |
|[v0.29.0](https://github.com/tensorflow/fairness-indicators/blob/v0.29.0/RELEASE.md) | 1.15.2 / 2.4 | 0.29.0 | 0.29.0 |
|[v0.28.0](https://github.com/tensorflow/fairness-indicators/blob/v0.28.0/RELEASE.md) | 1.15.2 / 2.4 | 0.28.0 | 0.28.0 |
|[v0.27.0](https://github.com/tensorflow/fairness-indicators/blob/v0.27.0/RELEASE.md) | 1.15.2 / 2.4 | 0.27.0 | 0.27.0 |
|[v0.26.0](https://github.com/tensorflow/fairness-indicators/blob/v0.26.0/RELEASE.md) | 1.15.2 / 2.3 | 0.26.0 | 0.26.0 |
|[v0.25.0](https://github.com/tensorflow/fairness-indicators/blob/v0.25.0/RELEASE.md) | 1.15.2 / 2.3 | 0.25.0 | 0.25.0 |
|[v0.24.0](https://github.com/tensorflow/fairness-indicators/blob/v0.24.0/RELEASE.md) | 1.15.2 / 2.3 | 0.24.0 | 0.24.0 |
|[v0.23.0](https://github.com/tensorflow/fairness-indicators/blob/v0.23.0/RELEASE.md) | 1.15.2 / 2.3 | 0.23.0 | 0.23.0 |
================================================
FILE: RELEASE.md
================================================
<!-- mdlint off(HEADERS_TOO_MANY_H1) -->
# Current Version (Still in Development)
## Major Features and Improvements
## Bug Fixes and Other Changes
## Breaking Changes
## Deprecations
# Version 0.48.0
## Major Features and Improvements
* N/A
## Bug Fixes and Other Changes
* Depends on `tensorflow>=2.17,<2.18`.
* Depends on `tensorflow-data-validation>=1.17.0,<1.18.0`.
* Depends on `tensorflow-model-analysis>=0.48,<0.49`.
* Depends on `protobuf>=4.21.6,<6.0.0`.
## Breaking Changes
* N/A
## Deprecations
* N/A
# Version 0.47.0
## Major Features and Improvements
* Add fairness indicator metrics in the third_party library.
## Bug Fixes and Other Changes
* Depends on `tensorflow>=2.16,<2.17`.
## Breaking Changes
* N/A
## Deprecations
* N/A
# Version 0.46.0
## Major Features and Improvements
* Update example model to use Keras models instead of estimators.
## Bug Fixes and Other Changes
* N/A
## Breaking Changes
* N/A
## Deprecations
* Deprecated python 3.8 support
# Version 0.44.0
## Major Features and Improvements
* N/A
## Bug Fixes and Other Changes
* Depends on `tensorflow>=2.12.0,<2.13`.
* Depends on `tensorflow-data-validation>=1.13.0,<1.14.0`.
* Depends on `tensorflow-model-analysis>=0.44,<0.45`.
* Depends on `protobuf>=3.20.3,<5`.
## Breaking Changes
* N/A
## Deprecations
* Deprecating python3.7 support.
# Version 0.43.0
## Major Features and Improvements
* N/A
## Bug Fixes and Other Changes
* Depends on `tensorflow>=2.11,<2.12`
* Depends on `tensorflow-data-validation>=1.11.0,<1.12.0`.
* Depends on `tensorflow-model-analysis>=0.42,<0.43`.
## Breaking Changes
* N/A
## Deprecations
* N/A
# Version 0.42.0
## Major Features and Improvements
* This is the last version that supports TensorFlow 1.15.x. TF 1.15.x support
will be removed in the next version. Please check the
[TF2 migration guide](https://www.tensorflow.org/guide/migrate) to migrate
to TF2.
## Bug Fixes and Other Changes
* N/A
## Breaking Changes
* N/A
## Deprecations
* N/A
# Version 0.41.0
## Major Features and Improvements
* N/A
## Bug Fixes and Other Changes
* Depends on `tensorflow-data-validation>=1.10.0,<1.11.0`.
* Depends on `tensorflow-model-analysis>=0.41,<0.42`.
* Depends on `tensorflow>=1.15.5,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,!=2.8.*,!=2.9.*,<3`.
## Breaking Changes
* N/A
## Deprecations
* N/A
# Version 0.40.0
## Major Features and Improvements
* Allow counterfactual metrics to be calculated from predictions instead of
only features.
* Add precision and recall to the set of fairness indicators metrics.
## Bug Fixes and Other Changes
* Depends on `tensorflow-data-validation>=1.9.0,<1.10.0`.
* Depends on `tensorflow-model-analysis>=0.40,<0.41`.
* Depends on `tensorflow>=1.15.5,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,!=2.8.*,<3`.
## Breaking Changes
* N/A
## Deprecations
* N/A
# Version 0.39.0
## Major Features and Improvements
* Allow counterfactual metrics to be calculated from predictions instead of
only features.
* Add precision and recall to the set of fairness indicators metrics.
## Bug Fixes and Other Changes
* Depends on `tensorflow-data-validation>=1.8.0,<1.9.0`.
* Depends on `tensorflow-model-analysis>=0.39,<0.40`.
* Depends on `tensorflow>=1.15.5,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3`.
## Breaking Changes
* N/A
## Deprecations
* N/A
# Version 0.38.0
## Major Features and Improvements
* N/A
## Bug Fixes and Other Changes
* Depends on `tensorflow-data-validation>=1.7.0,<1.8.0`.
* Depends on `tensorflow-model-analysis>=0.38,<0.39`.
* Depends on `tensorflow>=1.15.5,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3`.
## Breaking Changes
* N/A
## Deprecations
* N/A
# Version 0.37.0
## Major Features and Improvements
* N/A
## Bug Fixes and Other Changes
* Fix Fairness Indicators UI bug with overlapping charts when comparing EvalResults
* Depends on `tensorflow-data-validation>=1.6.0,<1.7.0`.
* Depends on `tensorflow-model-analysis>=0.37,<0.38`.
* Depends on `tensorflow>=1.15.5,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,<3`.
## Breaking Changes
* N/A
## Deprecations
* N/A
# Version 0.36.0
## Major Features and Improvements
* N/A
## Bug Fixes and Other Changes
* Depends on `tensorflow-data-validation>=1.5.0,<1.6.0`.
* Depends on `tensorflow-model-analysis>=0.36,<0.37`.
* Depends on `tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,<3`.
## Breaking Changes
* N/A
## Deprecations
* N/A
# Version 0.35.0
## Major Features and Improvements
* N/A
## Bug Fixes and Other Changes
* Depends on `tensorflow-data-validation>=1.4.0,<1.5.0`.
* Depends on `tensorflow-model-analysis>=0.35,<0.36`.
## Breaking Changes
* N/A
## Deprecations
* Deprecating python 3.6 support.
# Version 0.34.0
## Major Features and Improvements
* N/A
## Bug Fixes and Other Changes
* Depends on `tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,<3`.
* Depends on `tensorflow-data-validation>=1.3.0,<1.4.0`.
* Depends on `tensorflow-model-analysis>=0.34,<0.35`.
## Breaking Changes
* Drop Py2 support.
## Deprecations
* N/A
# Version 0.33.0
## Major Features and Improvements
* Porting Counterfactual Fairness metrics into FI UI.
## Bug Fixes and Other Changes
* Improve rendering of HTML stubs for Fairness Indicators UI
* Depends on `tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,<3`.
* Depends on `protobuf>=3.13,<4`.
* Depends on `tensorflow-data-validation>=1.2.0,<1.3.0`.
* Depends on `tensorflow-model-analysis>=0.33,<0.34`.
## Breaking Changes
* N/A
## Deprecations
* N/A
# Version 0.30.0
## Major Features and Improvements
* N/A
## Bug Fixes and Other Changes
* Depends on `tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.5.*,<3`.
* Depends on `tensorflow-data-validation>=0.30,<0.31`.
* Depends on `tensorflow-model-analysis>=0.30,<0.31`.
## Breaking Changes
* N/A
## Deprecations
* N/A
# Version 0.29.0
## Major Features and Improvements
* N/A
## Bug Fixes and Other Changes
* Depends on `tensorflow-data-validation>=0.29,<0.30`.
* Depends on `tensorflow-model-analysis>=0.29,<0.30`.
## Breaking Changes
* N/A
## Deprecations
* N/A
# Version 0.28.0
## Major Features and Improvements
* In Fairness Indicators UI, sort metrics list to show common metrics first
* For lift, support negative values in bar chart.
* Adding two new metrics - Flip Count and Flip Rate to evaluate Counterfactual
Fairness.
* Add Lift metrics under addons/fairness.
* Porting Lift metrics into FI UI.
## Bug Fixes and Other Changes
* Depends on `tensorflow-data-validation>=0.28,<0.29`.
* Depends on `tensorflow-model-analysis>=0.28,<0.29`.
## Breaking Changes
* N/A
## Deprecations
* N/A
# Version 0.27.0
## Major Features and Improvements
* N/A
## Bug fixes and other changes
* Added test cases for DLVM testing.
* Move the util files to a separate folder.
* Add `tensorflow-hub` as a dependency because it is used inside
example_model.py.
* Depends on `tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,<3`.
* Depends on `tensorflow-data-validation>=0.27,<0.28`.
* Depends on `tensorflow-model-analysis>=0.27,<0.28`.
## Breaking changes
* N/A
## Deprecations
* N/A
# Version 0.26.0
## Major Features and Improvements
* Sorting fairness metrics table rows to keep slices in order with slice drop
down in the UI.
## Bug fixes and other changes
* Update fairness_indicators.documentation.examples.util to TensorFlow 2.0.
* Table now displays 3 decimal places instead of 2.
* Fix the bug that metric list won't refresh if the input eval result changed.
* Remove d3-tip dependency.
* Depends on `tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.4.*,<3`.
* Depends on `tensorflow-data-validation>=0.26,<0.27`.
* Depends on `tensorflow-model-analysis>=0.26,<0.27`.
## Breaking changes
* N/A
## Deprecations
* N/A
# Version 0.25.0
## Major Features and Improvements
* Add workflow buttons to the Fairness Indicators UI, providing a tutorial on
how to configure metrics and parameters, and how to interpret the results.
* Add metric definitions as tooltips in the metric selector UI.
* Remove the prefix from metric names in graph titles in the UI.
* From this release Fairness Indicators will also be hosting nightly packages
on https://pypi-nightly.tensorflow.org. To install the nightly package use
the following command:
```
pip install --extra-index-url https://pypi-nightly.tensorflow.org/simple fairness-indicators
```
Note: These nightly packages are unstable and breakages are likely to
happen. A fix can often take a week or more, depending on the complexity
involved, before new wheels are available on the PyPI cloud service. You can
always use the stable version of Fairness Indicators available on PyPI by
running the command `pip install fairness-indicators`.
## Bug fixes and other changes
* Update table colors.
* Modify privacy note in Fairness Indicators UI.
* Depends on `tensorflow-data-validation>=0.25,<0.26`.
* Depends on `tensorflow-model-analysis>=0.25,<0.26`.
## Breaking changes
* N/A
## Deprecations
* N/A
# Version 0.24.0
## Major Features and Improvements
* Made the Fairness Indicators UI thresholds drop down list sorted.
## Bug fixes and other changes
* Fix the issue where the Sort menu is not hidden when there is no model
comparison.
* Depends on `tensorflow-data-validation>=0.24,<0.25`.
* Depends on `tensorflow-model-analysis>=0.24,<0.25`.
## Breaking changes
* N/A
## Deprecations
* Deprecated Py3.5 support.
# Version 0.23.1
## Major Features and Improvements
* N/A
## Bug fixes and other changes
* Fix broken import path in Fairness_Indicators_Example_Colab and Fairness_Indicators_on_TF_Hub_Text_Embeddings.
## Breaking changes
* N/A
## Deprecations
* N/A
# Version 0.23.0
## Major Features and Improvements
* N/A
## Bug fixes and other changes
* Depends on `tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,<3`.
* Depends on `tensorflow-data-validation>=0.23,<0.24`.
* Depends on `tensorflow-model-analysis>=0.23,<0.24`.
## Breaking changes
* N/A
## Deprecations
* Deprecating Py2 support.
* Note: We plan to drop py3.5 support in the next release.
================================================
FILE: docs/__init__.py
================================================
# Copyright 2019 Google LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
================================================
FILE: docs/guide/_index.yaml
================================================
book_path: /responsible_ai/_book.yaml
project_path: /responsible_ai/_project.yaml
title: Fairness Indicators
landing_page:
  custom_css_path: /site-assets/css/style.css
  nav: left
  meta_tags:
  - name: description
    content: >
      Fairness Indicators tool suite for TensorFlow.
  rows:
  - classname: devsite-landing-row-100
  - heading: Fairness Indicators
    options:
    - description-50
    items:
    - description: >
        <p>
        Fairness Indicators is a library that enables easy computation of commonly-identified
        fairness metrics for binary and multiclass classifiers. With the Fairness Indicators tool
        suite, you can:
        <ul>
          <li>
            Compute commonly-identified fairness metrics for classification models
          </li>
          <li>
            Compare model performance across subgroups to a baseline, or to other models
          </li>
          <li>
            Use confidence intervals to surface statistically significant disparities
          </li>
          <li>
            Perform evaluation over multiple thresholds
          </li>
        </ul>
        </p>
        <p>
        Use Fairness Indicators via the:
        <ul>
          <li>
            <a href="https://www.tensorflow.org/tfx/guide/evaluator">Evaluator
            component</a> in a <a href="https://www.tensorflow.org/tfx">TFX pipeline</a>
          </li>
          <li>
            <a href="https://github.com/tensorflow/tensorboard/blob/master/docs/fairness-indicators.md">
            TensorBoard plugin</a>
          </li>
          <li>
            <a href="https://www.tensorflow.org/tfx/guide/fairness_indicators">TensorFlow Model
            Analysis library</a>
          </li>
          <li>
            <a href="https://www.tensorflow.org/tfx/guide/fairness_indicators#using_fairness_indicators_with_non-tensorflow_models">Model
            Agnostic TFMA library</a>
          </li>
        </ul>
        </p>
    - code_block: |
        <pre class="prettyprint">
        eval_config_pbtxt = """
          model_specs {
            label_key: "%s"
          }
          metrics_specs {
            metrics {
              class_name: "FairnessIndicators"
              config: '{ "thresholds": [0.25, 0.5, 0.75] }'
            }
            metrics {
              class_name: "ExampleCount"
            }
          }
          slicing_specs {}
          slicing_specs {
            feature_keys: "%s"
          }
          options {
            compute_confidence_intervals { value: False }
            disabled_outputs { values: "analysis" }
          }
        """ % (LABEL_KEY, GROUP_KEY)
        </pre>
  - classname: devsite-landing-row-100
    items:
    - description: >
        <h3>Resources</h3>
  - classname: devsite-landing-row-cards
    items:
    - heading: "ML Practicum: Fairness in Perspective API using Fairness Indicators"
      image_path: /responsible_ai/fairness_indicators/images/mlpracticum.png
      path: "https://developers.google.com/machine-learning/practica/fairness-indicators?utm_source=github&utm_medium=github&utm_campaign=fi-practicum&utm_term=&utm_content=repo-body"
      buttons:
      - label: "Try the Case Study"
        path: "https://developers.google.com/machine-learning/practica/fairness-indicators?utm_source=github&utm_medium=github&utm_campaign=fi-practicum&utm_term=&utm_content=repo-body"
    - heading: "Fairness Indicators on the TensorFlow blog"
      image_path: /resources/images/tf-logo-card-16x9.png
      path: https://blog.tensorflow.org/2019/12/fairness-indicators-fair-ML-systems.html
      buttons:
      - label: "Read on the TensorFlow blog"
        path: https://blog.tensorflow.org/2019/12/fairness-indicators-fair-ML-systems.html
    - heading: "Fairness Indicators on GitHub"
      image_path: /resources/images/github-card-16x9.png
      path: https://github.com/tensorflow/fairness-indicators
      buttons:
      - label: "View on GitHub"
        path: https://github.com/tensorflow/fairness-indicators
  - classname: devsite-landing-row-cards
    items:
    - heading: "Fairness Indicators on the Google AI Blog"
      image_path: /responsible_ai/fairness_indicators/images/googleai.png
      path: https://ai.googleblog.com/2019/12/fairness-indicators-scalable.html
      buttons:
      - label: "Read on Google AI blog"
        path: https://ai.googleblog.com/2019/12/fairness-indicators-scalable.html
    - heading: "Fairness Indicators at Google I/O"
      path: https://www.youtube.com/watch?v=6CwzDoE8J4M
      youtube_id: 6CwzDoE8J4M?rel=0&show_info=0
      buttons:
      - label: "Watch the video"
        path: https://www.youtube.com/watch?v=6CwzDoE8J4M
================================================
FILE: docs/guide/_toc.yaml
================================================
toc:
- title: Overview
  path: /responsible_ai/fairness_indicators/guide/
- title: Thinking about fairness evaluation
  path: /responsible_ai/fairness_indicators/guide/guidance
================================================
FILE: docs/guide/guidance.md
================================================
# Fairness Indicators: Thinking about Fairness Evaluation
Fairness Indicators is a useful tool for evaluating _binary_ and _multi-class_
classifiers for fairness. Eventually, we hope to expand this tool, in
partnership with all of you, to evaluate even more considerations.
Keep in mind that quantitative evaluation is only one part of evaluating a
broader user experience. Start by thinking about the different _contexts_
through which a user may experience your product. Who are the different types of
users your product is expected to serve? Who else may be affected by the
experience?
When considering AI's impact on people, it is important to always remember that
human societies are extremely complex! People, their social identities, social
structures, and cultural systems are each huge fields of open research in their
own right. Add in the complexities of cross-cultural differences around the
globe, and getting even a foothold on understanding societal impact can be
challenging. Whenever possible, consult with appropriate domain experts, which
may include social scientists, sociolinguists, and cultural anthropologists, as
well as with members of the populations on which the technology will be
deployed.
A single model, for example, the toxicity model that we leverage in the
[example colab](../../tutorials/Fairness_Indicators_Example_Colab),
can be used in many different contexts. A toxicity model deployed on a website
to filter offensive comments, for example, is a very different use case than the
model being deployed in an example web UI where users can type in a sentence and
see what score the model gives. Depending on the use case and how users
experience the model prediction, your product will have different risks,
effects, and opportunities, and you may want to evaluate for different fairness
concerns.
The questions above are the foundation of what ethical considerations, including
fairness, you may want to take into account when designing and developing your
ML-based product. These questions also motivate which metrics and which groups
of users you should use the tool to evaluate.
Before diving in further, here are three recommended resources for getting
started:
* **[The People + AI Guidebook](https://pair.withgoogle.com/) for
Human-centered AI design:** This guidebook is a great resource for the
questions and aspects to keep in mind when designing a machine-learning
based product. While we created this guidebook with designers in mind, many
of the principles will help answer questions like the one posed above.
* **[Our Fairness Lessons Learned](https://www.youtube.com/watch?v=6CwzDoE8J4M):**
This talk at Google I/O discusses lessons we have learned in our goal to
build and design inclusive products.
* **[ML Crash Course: Fairness](https://developers.google.com/machine-learning/crash-course/fairness/video-lecture):**
The ML Crash Course has a 70-minute section dedicated to identifying and
evaluating fairness concerns.
So, why look at individual slices? Evaluation over individual slices is
important as strong overall metrics can obscure poor performance for certain
groups. Similarly, performing well for a certain metric (accuracy, AUC) doesn’t
always translate to acceptable performance for other metrics (false positive
rate, false negative rate) that are equally important in assessing opportunity
and harm for users.
The below sections will walk through some of the aspects to consider.
## Which groups should I slice by?
In general, a good practice is to slice by as many groups as may be affected by
your product, since you never know when performance might differ for one group
or another. However, if you aren’t sure, think about the different users who
may be engaging with your product and how they might be affected. Consider,
especially, slices related to sensitive characteristics such as race, ethnicity,
gender, nationality, income, sexual orientation, and disability status.
**What if I don’t have data labeled for the slices I want to investigate?**
Good question. We know that many datasets don’t have ground-truth labels for
individual identity attributes.
If you find yourself in this position, we recommend a few approaches:
1. Identify whether there _are_ attributes you already have that may give you
   some insight into performance across groups. For example, _geography_,
   while not equivalent to ethnicity and race, may help you uncover disparate
   patterns in performance.
1. Identify if there are representative public datasets that might map well to
your problem. You can find a range of diverse and inclusive datasets on the
[Google AI site](https://ai.google/responsibilities/responsible-ai-practices/?category=fairness),
which include
[Project Respect](https://www.blog.google/technology/ai/fairness-matters-promoting-pride-and-respect-ai/),
[Inclusive Images](https://www.kaggle.com/c/inclusive-images-challenge), and
[Open Images Extended](https://ai.google/tools/datasets/open-images-extended-crowdsourced/),
among others.
1. Leverage rules or classifiers, when relevant, to label your data with
objective surface-level attributes. For example, you can label text according
to whether an identity term appears _in_ the sentence (see the sketch after
this list). Keep in mind that classifiers have their own challenges and, if
you’re not careful, may introduce another layer of bias as well. Be clear
about what your classifier is
<span style="text-decoration:underline;">actually</span> classifying. For
example, an age classifier on images is in fact classifying _perceived age_.
Additionally, when possible, leverage surface-level attributes that _can_ be
objectively identified in the data. For example, it is ill-advised to build
an image classifier for race or ethnicity, because these are not visual
traits that can be defined in an image. A classifier would likely pick up on
proxies or stereotypes. Instead, building a classifier for skin tone may be
a more appropriate way to label and evaluate an image. Lastly, ensure high
accuracy for classifiers labeling such attributes.
1. Find more representative data that is labeled.
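As a concrete illustration of the rule-based labeling approach above, the
following sketch labels text by whether an identity term appears in it; the
term list and example comments here are purely hypothetical:

```python
# Purely illustrative identity-term list; a real list should be carefully
# curated for the domain and language at hand.
IDENTITY_TERMS = ["woman", "man", "muslim", "jewish", "gay", "lesbian"]

def has_identity_term(sentence: str) -> bool:
    """Labels whether any identity term appears in the sentence."""
    tokens = sentence.lower().split()
    return any(term in tokens for term in IDENTITY_TERMS)

comments = ["I am a proud gay father", "The weather is nice today"]
labels = [has_identity_term(c) for c in comments]  # [True, False]
```

Note that this labels only the surface-level presence of a term, not the
identity of the author or of the person the comment is about.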
**Always make sure to evaluate on multiple, diverse datasets.**
If your evaluation data is not adequately representative of your user base, or
the types of data likely to be encountered, you may end up with deceptively good
fairness metrics. Similarly, high model performance on one dataset doesn’t
guarantee high performance on others.
**Keep in mind subgroups aren’t always the best way to classify individuals.**
People are multidimensional and belong to more than one group, even within a
single dimension -- consider someone who is multiracial, or belongs to multiple
racial groups. Also, while overall metrics for a given racial group may look
equitable, particular interactions, such as race and gender together, may show
unintended bias. Moreover, many subgroups have fuzzy boundaries which are
constantly being redrawn.
**When have I tested enough slices, and how do I know which slices to test?**
We acknowledge that there are a vast number of groups or slices that may be
relevant to test, and when possible, we recommend slicing and evaluating a
diverse and wide range of slices and then deep-diving where you spot
opportunities for improvement. It is also important to acknowledge that even
though you may not see concerns on slices you have tested, that doesn’t imply
that your product works for _all_ users, and getting diverse user feedback and
testing is important to ensure that you are continually identifying new
opportunities.
To get started, we recommend thinking through your particular use case and the
different ways users may engage with your product. How might different users
have different experiences? What does that mean for slices you should evaluate?
Collecting feedback from diverse users may also highlight potential slices to
prioritize.
## Which metrics should I choose?
When selecting which metrics to evaluate for your system, consider who will be
experiencing your model, how it will be experienced, and the effects of that
experience.
For example, how does your model give people more dignity or autonomy, or
positively impact their emotional, physical or financial wellbeing? In contrast,
how could your model’s predictions reduce people's dignity or autonomy, or
negatively impact their emotional, physical or financial wellbeing?
**In general, we recommend slicing _all your existing performance metrics_ as
good practice. We also recommend evaluating your metrics across
_<span style="text-decoration:underline;">multiple thresholds</span>_** in order
to understand how the threshold can affect the performance for different groups.
In addition, if there is a predicted label which is uniformly “good” or “bad”,
then consider reporting (for each subgroup) the rate at which that label is
predicted. For example, a “good” label would be a label whose prediction grants
a person access to some resource, or enables them to perform some action.
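As one way to do this, Fairness Indicators accepts a list of thresholds in its
TensorFlow Model Analysis metric config. Here is a minimal sketch, assuming a
placeholder label key "toxicity" and a placeholder slice feature "gender":

```python
from google.protobuf import text_format
import tensorflow_model_analysis as tfma

# Evaluate Fairness Indicators at several thresholds, both overall and
# sliced by a group feature.
eval_config = text_format.Parse("""
  model_specs { label_key: "toxicity" }
  metrics_specs {
    metrics {
      class_name: "FairnessIndicators"
      config: '{ "thresholds": [0.25, 0.5, 0.75] }'
    }
  }
  slicing_specs {}                          # overall slice
  slicing_specs { feature_keys: "gender" }  # per-group slices
""", tfma.EvalConfig())
```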
## Critical fairness metrics for classification
When thinking about a classification model, think about the effects of _errors_
(the differences between the actual “ground truth” label, and the label from the
model). If some errors may pose more opportunity or harm to your users, make
sure you evaluate the rates of these errors across groups of users. These error
rates are defined below, in the metrics currently supported by the Fairness
Indicators beta.
**Over the course of the next year, we hope to release case studies of different
use cases and the metrics associated with these so that we can better highlight
when different metrics might be most appropriate.**
**Metrics available today in Fairness Indicators**
Note: There are many valuable fairness metrics that are not currently supported
in the Fairness Indicators beta. As we continue to add more metrics, we will
continue to add guidance for these metrics here. Below, you can access
instructions to add your own metrics to Fairness Indicators. Additionally,
please reach out to [tfx@tensorflow.org](mailto:tfx@tensorflow.org) if there are
metrics that you would like to see. We hope to partner with you to build this
out further.
**Positive Rate / Negative Rate**
* _<span style="text-decoration:underline;">Definition:</span>_ The percentage
of data points that are classified as positive or negative, independent of
ground truth
* _<span style="text-decoration:underline;">Relates to:</span>_ Demographic
Parity and Equality of Outcomes, when equal across subgroups
* _<span style="text-decoration:underline;">When to use this metric:</span>_
Fairness use cases where having equal final percentages of groups is
important
**True Positive Rate / False Negative Rate**
* _<span style="text-decoration:underline;">Definition:</span>_ The percentage
of positive data points (as labeled in the ground truth) that are
_correctly_ classified as positive, or the percentage of positive data
points that are _incorrectly_ classified as negative
* _<span style="text-decoration:underline;">Relates to:</span>_ Equality of
Opportunity (for the positive class), when equal across subgroups
* _<span style="text-decoration:underline;">When to use this metric:</span>_
Fairness use cases where it is important that the same % of qualified
candidates are rated positive in each group. This is most commonly
recommended in cases of classifying positive outcomes, such as loan
applications, school admissions, or whether content is kid-friendly
**True Negative Rate / False Positive Rate**
* _<span style="text-decoration:underline;">Definition:</span>_ The percentage
of negative data points (as labeled in the ground truth) that are correctly
classified as negative, or the percentage of negative data points that are
incorrectly classified as positive
* _<span style="text-decoration:underline;">Relates to:</span>_ Equality of
Opportunity (for the negative class), when equal across subgroups
* _<span style="text-decoration:underline;">When to use this metric:</span>_
Fairness use cases where misclassifying something as positive is more
concerning than missing positives. This is most common in abuse-detection
use cases, where _positive_ predictions often trigger negative actions
against users. These metrics are also important for facial analysis
technologies such as face detection or face attribute classification
Note: When both “positive” and “negative” mistakes are equally important, the
metric is called “equality of
<span style="text-decoration:underline;">odds</span>”. This can be measured by
evaluating and aiming for equality across both the TNR & FNR, or both the TPR &
FPR. For example, an app that counts how many cars go past a stop sign is
roughly equally bad whether it accidentally includes an extra car (a false
positive) or accidentally excludes a car (a false negative).
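All of the rate definitions above can be read off a per-slice confusion
matrix. A minimal sketch, with hypothetical counts:

```python
def rates(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Computes the rates defined above from confusion-matrix counts."""
    total = tp + fp + tn + fn
    return {
        "positive_rate": (tp + fp) / total,    # classified positive, any label
        "negative_rate": (tn + fn) / total,    # classified negative, any label
        "true_positive_rate": tp / (tp + fn),  # of ground-truth positives
        "false_negative_rate": fn / (tp + fn),
        "true_negative_rate": tn / (tn + fp),  # of ground-truth negatives
        "false_positive_rate": fp / (tn + fp),
    }

# Comparing the same rates across two hypothetical slices can surface gaps,
# such as a higher false positive rate for one group.
print(rates(tp=40, fp=10, tn=35, fn=15))
print(rates(tp=20, fp=25, tn=30, fn=25))
```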
**Accuracy & AUC**
* _<span style="text-decoration:underline;">Relates to:</span>_ Predictive
Parity, when equal across subgroups
* _<span style="text-decoration:underline;">When to use these metrics:</span>_
Cases where precision of the task is most critical (not necessarily in a
given direction), such as face identification or face clustering
**False Discovery Rate**
* _<span style="text-decoration:underline;">Definition:</span>_ The percentage
of negative data points (as labeled in the ground truth) that are
incorrectly classified as positive out of all data points classified as
positive. This is the complement of PPV (FDR = 1 − PPV)
* _<span style="text-decoration:underline;">Relates to:</span>_ Predictive
Parity (also known as Calibration), when equal across subgroups
* _<span style="text-decoration:underline;">When to use this metric:</span>_
Cases where the fraction of correct positive predictions should be equal
across subgroups
**False Omission Rate**
* _<span style="text-decoration:underline;">Definition:</span>_ The percentage
of positive data points (as labeled in the ground truth) that are
incorrectly classified as negative out of all data points classified as
negative. This is the complement of NPV (FOR = 1 − NPV)
* _<span style="text-decoration:underline;">Relates to:</span>_ Predictive
Parity (also known as Calibration), when equal across subgroups
* _<span style="text-decoration:underline;">When to use this metric:</span>_
Cases where the fraction of correct negative predictions should be equal
across subgroups
Note: When used together, False Discovery Rate and False Omission Rate relate to
Conditional Use Accuracy Equality, when FDR and FOR are both equal across
subgroups. FDR and FOR are also similar to FPR and FNR, where FDR/FOR compare
FP/FN to predicted positive/negative data points, and FPR/FNR compare FP/FN to
ground truth negative/positive data points. FDR/FOR can be used instead of
FPR/FNR when predictive parity is more critical than equality of opportunity.
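The denominator difference described in this note is easy to see in code. A
short sketch, using the same hypothetical confusion-matrix counts as above:

```python
def false_discovery_rate(tp: int, fp: int) -> float:
    # False positives among everything *predicted* positive (FDR = 1 - PPV).
    return fp / (tp + fp)

def false_omission_rate(tn: int, fn: int) -> float:
    # False negatives among everything *predicted* negative (FOR = 1 - NPV).
    return fn / (tn + fn)

# Contrast with FPR/FNR, which divide by *ground-truth* counts instead:
# false_positive_rate = fp / (tn + fp); false_negative_rate = fn / (tp + fn).
```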
**Overall Flip Rate / Positive to Negative Prediction Flip Rate / Negative to
Positive Prediction Flip Rate**
* *<span style="text-decoration:underline;">Definition:</span>* The
probability that the classifier gives a different prediction if the identity
attribute in a given example were changed.
* *<span style="text-decoration:underline;">Relates to:</span>* Counterfactual
fairness
* *<span style="text-decoration:underline;">When to use this metric:</span>*
When determining whether the model’s prediction changes when the sensitive
attributes referenced in the example are removed or replaced. If it does,
consider using the Counterfactual Logit Pairing technique within the
TensorFlow Model Remediation library.
**Flip Count / Positive to Negative Prediction Flip Count / Negative to Positive
Prediction Flip Count**
* *<span style="text-decoration:underline;">Definition:</span>* The number of
times the classifier gives a different prediction if the identity term in a
given example were changed.
* *<span style="text-decoration:underline;">Relates to:</span>* Counterfactual
fairness
* *<span style="text-decoration:underline;">When to use this metric:</span>*
When determining whether the model’s prediction changes when the sensitive
attributes referenced in the example are removed or replaced. If it does,
consider using the Counterfactual Logit Pairing technique within the
TensorFlow Model Remediation library.
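Below is a minimal sketch of estimating a flip rate by counterfactually
swapping identity terms. Both `model_predict` and the substitution pairs are
hypothetical stand-ins, not part of the Fairness Indicators API:

```python
# Illustrative counterfactual substitutions; a real list should be curated.
COUNTERFACTUAL_PAIRS = [("he", "she"), ("gay", "straight")]

def flip_rate(examples, model_predict):
    """Fraction of counterfactual rewrites that change the prediction."""
    flips, attempts = 0, 0
    for text in examples:
        original = model_predict(text)
        tokens = text.split()
        for a, b in COUNTERFACTUAL_PAIRS:
            if a in tokens:
                counterfactual = " ".join(b if t == a else t for t in tokens)
                attempts += 1
                if model_predict(counterfactual) != original:
                    flips += 1
    return flips / attempts if attempts else 0.0
```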
**Examples of which metrics to select**
* _Systematically failing to detect faces in a camera app can lead to a
negative user experience for certain user groups._ In this case, false
negatives in a face detection system may lead to product failure, while a
false positive (detecting a face when there isn’t one) may pose a slight
annoyance to the user. Thus, evaluating and minimizing the false negative
rate is important for this use case.
* _Unfairly marking text comments from certain people as “spam” or “high
toxicity” in a moderation system leads to certain voices being silenced._ On
one hand, a high false positive rate leads to unfair censorship. On the
other, a high false negative rate could lead to a proliferation of toxic
content from certain groups, which may both harm the user and constitute a
representational harm for those groups. Thus, both metrics are important to
consider, in addition to metrics which take into account all types of errors
such as accuracy or AUC.
**Don’t see the metrics you’re looking for?**
Follow the documentation
[here](https://tensorflow.github.io/model-analysis/post_export_metrics/)
to add your own custom metric.
## Final notes
**A gap in a metric between two groups can be a sign that your model may have
unfair skews**. You should interpret your results according to your use case.
However, the first sign that you may be treating one set of users _unfairly_ is
when the metrics between that set of users and your overall are significantly
different. Make sure to account for confidence intervals when looking at these
differences. When you have too few samples in a particular slice, the difference
between metrics may not be accurate.
**Achieving equality across groups on Fairness Indicators doesn’t mean the model
is fair.** Systems are highly complex, and achieving equality on one (or even
all) of the provided metrics can’t guarantee Fairness.
**Fairness evaluations should be run throughout the development process and
post-launch (not the day before launch).** Just like improving your product is
an ongoing process and subject to adjustment based on user and market feedback,
making your product fair and equitable requires ongoing attention. As different
aspects of the model change, such as training data, inputs from other models,
or the design itself, fairness metrics are likely to change. “Clearing the bar”
once isn’t enough to ensure that all of the interacting components have remained
intact over time.
**Adversarial testing should be performed for rare, malicious examples.**
Fairness evaluations aren’t meant to replace adversarial testing. Additional
defense against rare, targeted examples is crucial as these examples probably
will not manifest in training or evaluation data.
================================================
FILE: docs/index.md
================================================
# Fairness Indicators
/// html | div[style='float: left; width: 50%;']
Fairness Indicators is a library that enables easy computation of commonly-identified fairness metrics for binary and multiclass classifiers. With the Fairness Indicators tool suite, you can:
- Compute commonly-identified fairness metrics for classification models
- Compare model performance across subgroups to a baseline, or to other models
- Use confidence intervals to surface statistically significant disparities
- Perform evaluation over multiple thresholds
Use Fairness Indicators via the:
- [Evaluator component](https://tensorflow.github.io/tfx/guide/evaluator/) in a [TFX pipeline](https://tensorflow.github.io/tfx/)
- [TensorBoard plugin](https://github.com/tensorflow/tensorboard/blob/master/docs/fairness-indicators.md)
- [TensorFlow Model Analysis library](https://tensorflow.github.io/tfx/guide/fairness_indicators/)
- [Model Agnostic TFMA library](https://tensorflow.github.io/tfx/guide/fairness_indicators/#using-fairness-indicators-with-non-tensorflow-models)
<!-- TODO: Change the TFMA link when the new docs are deployed -->
///
/// html | div[style='float: right;width: 50%;']
```python
eval_config_pbtxt = """
model_specs {
label_key: "%s"
}
metrics_specs {
metrics {
class_name: "FairnessIndicators"
config: '{ "thresholds": [0.25, 0.5, 0.75] }'
}
metrics {
class_name: "ExampleCount"
}
}
slicing_specs {}
slicing_specs {
feature_keys: "%s"
}
options {
compute_confidence_intervals { value: False }
disabled_outputs{values: "analysis"}
}
""" % (LABEL_KEY, GROUP_KEY)
```
///
/// html | div[style='clear: both;']
///
<div class="grid cards" markdown>
- 
### [ML Practicum: Fairness in Perspective API using Fairness Indicators](https://developers.google.com/machine-learning/practica/fairness-indicators?utm_source=github&utm_medium=github&utm_campaign=fi-practicum&utm_term=&utm_content=repo-body)
---
[Try the Case Study](https://developers.google.com/machine-learning/practica/fairness-indicators?utm_source=github&utm_medium=github&utm_campaign=fi-practicum&utm_term=&utm_content=repo-body)
- 
### [Fairness Indicators on the TensorFlow blog](https://blog.tensorflow.org/2019/12/fairness-indicators-fair-ML-systems.html)
---
[Read on the TensorFlow blog](https://blog.tensorflow.org/2019/12/fairness-indicators-fair-ML-systems.html)
- 
### [Fairness Indicators on GitHub](https://github.com/tensorflow/fairness-indicators)
---
[View on GitHub](https://github.com/tensorflow/fairness-indicators)
- 
### [Fairness Indicators on the Google AI Blog](https://ai.googleblog.com/2019/12/fairness-indicators-scalable.html)
---
[Read on Google AI blog](https://ai.googleblog.com/2019/12/fairness-indicators-scalable.html)
- <iframe width="560" height="315" src="https://www.youtube.com/embed/6CwzDoE8J4M?si=gIL2KHdj96_SxdVH" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
### [Fairness Indicators at Google I/O](https://www.youtube.com/watch?v=6CwzDoE8J4M)
---
[Watch the video](https://www.youtube.com/watch?v=6CwzDoE8J4M)
</div>
================================================
FILE: docs/javascripts/mathjax.js
================================================
window.MathJax = {
tex: {
inlineMath: [["\\(", "\\)"]],
displayMath: [["\\[", "\\]"]],
processEscapes: true,
processEnvironments: true
},
options: {
ignoreHtmlClass: ".*|",
processHtmlClass: "arithmatex"
}
};
document$.subscribe(() => {
MathJax.startup.output.clearCache()
MathJax.typesetClear()
MathJax.texReset()
MathJax.typesetPromise()
})
================================================
FILE: docs/stylesheets/extra.css
================================================
:root {
--md-primary-fg-color: #FFA800;
--md-primary-fg-color--light: #CCCCCC;
--md-primary-fg-color--dark: #425066;
}
.video-wrapper {
max-width: 240px;
display: flex;
flex-direction: row;
}
.video-wrapper > iframe {
width: 100%;
aspect-ratio: 16 / 9;
}
.buttons-wrapper {
flex-wrap: wrap;
gap: 1em;
display: flex;
/* flex-grow: 1; */
/* justify-content: center; */
/* align-content: center; */
}
.buttons-wrapper > a {
justify-content: center;
align-content: center;
flex-wrap: nowrap;
/* gap: 1em; */
align-items: center;
text-align: center;
flex: 1 1 30%;
display: flex;
}
.md-button > .buttons-content {
align-items: center;
justify-content: center;
display: flex;
gap: 1em;
}
================================================
FILE: docs/tutorials/Facessd_Fairness_Indicators_Example_Colab.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "Sxt-9qpNgPxo"
},
"source": [
"##### Copyright 2020 The TensorFlow Authors."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Phnw6c3-gQ1f"
},
"outputs": [],
"source": [
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "aalPefrUUplk"
},
"source": [
"# FaceSSD Fairness Indicators Example Colab"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "KFRBcGOYgEAI"
},
"source": [
"<div class=\"buttons-wrapper\">\n",
" <a class=\"md-button\" target=\"_blank\" href=\n",
" \"https://tensorflow.github.io/fairness-indicators/tutorials/Facessd_Fairness_Indicators_Example_Colab\">\n",
" <div class=\"buttons-content\">\n",
" <img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">\n",
" View on TensorFlow.org\n",
" </div>\n",
" </a>\n",
" <a class=\"md-button\" target=\"_blank\" href=\n",
" \"https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Facessd_Fairness_Indicators_Example_Colab.ipynb\">\n",
" <div class=\"buttons-content\">\n",
" <img src=\n",
"\t \"https://www.tensorflow.org/images/colab_logo_32px.png\">\n",
" Run in Google Colab\n",
" </div>\n",
" </a>\n",
" <a class=\"md-button\" target=\"_blank\" href=\n",
" \"https://github.com/tensorflow/fairness-indicators/tree/master/docs/tutorials/Facessd_Fairness_Indicators_Example_Colab.ipynb\">\n",
" <div class=\"buttons-content\">\n",
" <img width=\"32px\" src=\n",
"\t \"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">\n",
" View source on GitHub\n",
" </div>\n",
" </a>\n",
" <a class=\"md-button\" href=\n",
" \"https://storage.googleapis.com/tensorflow_docs/fairness-indicators/g3doc/tutorials/Facessd_Fairness_Indicators_Example_Colab.ipynb\">\n",
" <div class=\"buttons-content\">\n",
" <img src=\n",
"\t \"https://www.tensorflow.org/images/download_logo_32px.png\">\n",
" Download notebook\n",
" </div>\n",
" </a>\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "UZ48WFLwbCL6"
},
"source": [
"##Overview\n",
"\n",
"In this activity, you'll use [Fairness Indicators](https://tensorflow.github.io/fairness-indicators) to explore the [FaceSSD predictions on Labeled Faces in the Wild dataset](https://modelcards.withgoogle.com/face-detection). Fairness Indicators is a suite of tools built on top of [TensorFlow Model Analysis](https://tensorflow.github.io/model-analysis/get_started) that enable regular evaluation of fairness metrics in product pipelines.\n",
"\n",
"##About the Dataset\n",
"\n",
"In this exercise, you'll work with the FaceSSD prediction dataset, approximately 200k different image predictions and groundtruths generated by FaceSSD API.\n",
"\n",
"##About the Tools\n",
"\n",
"[TensorFlow Model Analysis](https://tensorflow.github.io/model_analysis/get_started) is a library for evaluating both TensorFlow and non-TensorFlow machine learning models. It allows users to evaluate their models on large amounts of data in a distributed manner, computing in-graph and other metrics over different slices of data and visualize in notebooks.\n",
"\n",
"[TensorFlow Data Validation](https://tensorflow.github.io/data-validation/get_started) is one tool you can use to analyze your data. You can use it to find potential problems in your data, such as missing values and data imbalances, that can lead to Fairness disparities.\n",
"\n",
"With [Fairness Indicators](https://tensorflow.github.io/fairness-indicators/), users will be able to: \n",
"\n",
"* Evaluate model performance, sliced across defined groups of users\n",
"* Feel confident about results with confidence intervals and evaluations at multiple thresholds"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "u33JXdluZ2lG"
},
"source": [
"# Importing\n",
"\n",
"Run the following code to install the fairness_indicators library. This package contains the tools we'll be using in this exercise. Restart Runtime may be requested but is not necessary."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "EoRNffG599XP"
},
"outputs": [],
"source": [
"!pip install apache_beam\n",
"!pip install fairness-indicators\n",
"!pip install witwidget\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "B8dlyTyiTe-9"
},
"outputs": [],
"source": [
"import os\n",
"import tempfile\n",
"import apache_beam as beam\n",
"import numpy as np\n",
"import pandas as pd\n",
"from datetime import datetime\n",
"\n",
"import tensorflow_hub as hub\n",
"import tensorflow as tf\n",
"import tensorflow_model_analysis as tfma\n",
"import tensorflow_data_validation as tfdv\n",
"from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators\n",
"from tensorflow_model_analysis.addons.fairness.view import widget_view\n",
"from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_predict as agnostic_predict\n",
"from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_evaluate_graph\n",
"from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_extractor\n",
"\n",
"from witwidget.notebook.visualization import WitConfigBuilder\n",
"from witwidget.notebook.visualization import WitWidget"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "TsplOJGqWCf5"
},
"source": [
"# Download and Understand the Data"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "vFOQ4AaIcAn2"
},
"source": [
"[Labeled Faces in the Wild](http://vis-www.cs.umass.edu/lfw/) is a public benchmark dataset for face verification, also known as pair matching. LFW contains more than 13,000 images of faces collected from the web.\n",
"\n",
"We ran FaceSSD predictions on this dataset to predict whether a face is present in a given image. In this Colab, we will slice data according to gender to observe if there are any significant differences between model performance for different gender groups.\n",
"\n",
"If there is more than one face in an image, gender is labeled as \"MISSING\".\n",
"\n",
"We've hosted the dataset on Google Cloud Platform for convenience. Run the following code to download the data from GCP, the data will take about a minute to download and analyze."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NdLBi6tN5i7I"
},
"outputs": [],
"source": [
"data_location = tf.keras.utils.get_file('lfw_dataset.tf', 'https://storage.googleapis.com/facessd_dataset/lfw_dataset.tfrecord')\n",
"\n",
"stats = tfdv.generate_statistics_from_tfrecord(data_location=data_location)\n",
"tfdv.visualize_statistics(stats)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cNODEwE5x7Uo"
},
"source": [
"# Defining Constants"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ZF4NO87uFxdQ"
},
"outputs": [],
"source": [
"BASE_DIR = tempfile.gettempdir()\n",
"\n",
"tfma_eval_result_path = os.path.join(BASE_DIR, 'tfma_eval_result')\n",
"\n",
"compute_confidence_intervals = True\n",
"\n",
"slice_key = 'object/groundtruth/Gender'\n",
"label_key = 'object/groundtruth/face'\n",
"prediction_key = 'object/prediction/face'\n",
"\n",
"feature_map = {\n",
" slice_key:\n",
" tf.io.FixedLenFeature([], tf.string, default_value=['none']),\n",
" label_key:\n",
" tf.io.FixedLenFeature([], tf.float32, default_value=[0.0]),\n",
" prediction_key:\n",
" tf.io.FixedLenFeature([], tf.float32, default_value=[0.0]),\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gVLHwuhEyI8R"
},
"source": [
"# Model Agnostic Config for TFMA"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ej1nGCZSyJIK"
},
"outputs": [],
"source": [
"model_agnostic_config = agnostic_predict.ModelAgnosticConfig(\n",
" label_keys=[label_key],\n",
" prediction_keys=[prediction_key],\n",
" feature_spec=feature_map)\n",
"\n",
"model_agnostic_extractors = [\n",
" model_agnostic_extractor.ModelAgnosticExtractor(\n",
" model_agnostic_config=model_agnostic_config, desired_batch_size=3),\n",
" tfma.extractors.slice_key_extractor.SliceKeyExtractor(\n",
" [tfma.slicer.SingleSliceSpec(),\n",
" tfma.slicer.SingleSliceSpec(columns=[slice_key])])\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "wqkk9SkvyVkR"
},
"source": [
"# Fairness Callbacks and Computing Fairness Metrics"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "A0icrlliBCOb"
},
"outputs": [],
"source": [
"# Helper class for counting examples in beam PCollection\n",
"class CountExamples(beam.CombineFn):\n",
" def __init__(self, message):\n",
" self.message = message\n",
"\n",
" def create_accumulator(self):\n",
" return 0\n",
"\n",
" def add_input(self, current_sum, element):\n",
" return current_sum + 1\n",
"\n",
" def merge_accumulators(self, accumulators): \n",
" return sum(accumulators)\n",
"\n",
" def extract_output(self, final_sum):\n",
" if final_sum:\n",
" print(\"%s: %d\"%(self.message, final_sum))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "mRQjdjp9yVv2"
},
"outputs": [],
"source": [
"metrics_callbacks = [\n",
" tfma.post_export_metrics.fairness_indicators(\n",
" thresholds=[0.1, 0.3, 0.5, 0.7, 0.9],\n",
" labels_key=label_key,\n",
" target_prediction_keys=[prediction_key]),\n",
" tfma.post_export_metrics.auc(\n",
" curve='PR',\n",
" labels_key=label_key,\n",
" target_prediction_keys=[prediction_key]),\n",
"]\n",
"\n",
"eval_shared_model = tfma.types.EvalSharedModel(\n",
" add_metrics_callbacks=metrics_callbacks,\n",
" construct_fn=model_agnostic_evaluate_graph.make_construct_fn(\n",
" add_metrics_callbacks=metrics_callbacks,\n",
" config=model_agnostic_config))\n",
"\n",
"with beam.Pipeline() as pipeline:\n",
" # Read data.\n",
" data = (\n",
" pipeline\n",
" | 'ReadData' >> beam.io.ReadFromTFRecord(data_location))\n",
"\n",
" # Count all examples.\n",
" data_count = (\n",
" data | 'Count number of examples' >> beam.CombineGlobally(\n",
" CountExamples('Before filtering \"Gender:MISSING\"')))\n",
"\n",
" # If there are more than one face in image, the gender feature is 'MISSING'\n",
" # and we are filtering that image out.\n",
" def filter_missing_gender(element):\n",
" example = tf.train.Example.FromString(element)\n",
" if example.features.feature[slice_key].bytes_list.value[0] != b'MISSING':\n",
" yield element\n",
"\n",
" filtered_data = (\n",
" data\n",
" | 'Filter Missing Gender' >> beam.ParDo(filter_missing_gender))\n",
"\n",
" # Count after filtering \"Gender:MISSING\".\n",
" filtered_data_count = (\n",
" filtered_data | 'Count number of examples after filtering'\n",
" >> beam.CombineGlobally(\n",
" CountExamples('After filtering \"Gender:MISSING\"')))\n",
"\n",
" # Because LFW data set has always faces by default, we are adding\n",
" # labels as 1.0 for all images.\n",
" def add_face_groundtruth(element):\n",
" example = tf.train.Example.FromString(element)\n",
" example.features.feature[label_key].float_list.value[:] = [1.0]\n",
" yield example.SerializeToString()\n",
"\n",
" final_data = (\n",
" filtered_data\n",
" | 'Add Face Groundtruth' >> beam.ParDo(add_face_groundtruth))\n",
"\n",
" # Run TFMA.\n",
" _ = (\n",
" final_data\n",
" | 'ExtractEvaluateAndWriteResults' >>\n",
" tfma.ExtractEvaluateAndWriteResults(\n",
" eval_shared_model=eval_shared_model,\n",
" compute_confidence_intervals=compute_confidence_intervals,\n",
" output_path=tfma_eval_result_path,\n",
" extractors=model_agnostic_extractors))\n",
"\n",
"eval_result = tfma.load_eval_result(output_path=tfma_eval_result_path)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ktlASJQIzE3l"
},
"source": [
"# Render Fairness Indicators\n",
"\n",
"Render the Fairness Indicators widget with the exported evaluation results.\n",
"\n",
"Below you will see bar charts displaying performance of each slice of the data on selected metrics. You can adjust the baseline comparison slice as well as the displayed threshold(s) using the drop down menus at the top of the visualization.\n",
"\n",
"A relevant metric for this use case is true positive rate, also known as recall. Use the selector on the left hand side to choose the graph for true_positive_rate. These metric values match the values displayed on the [model card](https://modelcards.withgoogle.com/face-detection).\n",
"\n",
"For some photos, gender is labeled as young instead of male or female, if the person in the photo is too young to be accurately annotated."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "JNaNhTCTAMHm"
},
"outputs": [],
"source": [
"widget_view.render_fairness_indicator(eval_result=eval_result,\n",
" slicing_column=slice_key)"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"collapsed_sections": [
"Sxt-9qpNgPxo"
],
"name": "Facessd Fairness Indicators Example Colab.ipynb",
"provenance": [],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.22"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
================================================
FILE: docs/tutorials/Fairness_Indicators_Example_Colab.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "Tce3stUlHN0L"
},
"source": [
"##### Copyright 2020 The TensorFlow Authors."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "tuOe1ymfHZPu"
},
"outputs": [],
"source": [
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "aalPefrUUplk"
},
"source": [
"# Introduction to Fairness Indicators"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "MfBg1C5NB3X0"
},
"source": [
"<div class=\"buttons-wrapper\">\n",
" <a class=\"md-button\" target=\"_blank\" href=\n",
" \"https://tensorflow.github.io/fairness-indicators/tutorials/Fairness_Indicators_Example_Colab\">\n",
" <div class=\"buttons-content\">\n",
" <img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">\n",
" View on TensorFlow.org\n",
" </div>\n",
" </a>\n",
" <a class=\"md-button\" target=\"_blank\" href=\n",
" \"https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_Example_Colab.ipynb\">\n",
" <div class=\"buttons-content\">\n",
" <img src=\n",
"\t \"https://www.tensorflow.org/images/colab_logo_32px.png\">\n",
" Run in Google Colab\n",
" </div>\n",
" </a>\n",
" <a class=\"md-button\" target=\"_blank\" href=\n",
" \"https://github.com/tensorflow/fairness-indicators/blob/master/docs/tutorials/Fairness_Indicators_Example_Colab.ipynb\">\n",
" <div class=\"buttons-content\">\n",
" <img width=\"32px\" src=\n",
"\t \"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">\n",
" View source on GitHub\n",
" </div>\n",
" </a>\n",
" <a class=\"md-button\" href=\n",
" \"https://storage.googleapis.com/tensorflow_docs/fairness-indicators/g3doc/tutorials/Fairness_Indicators_Example_Colab.ipynb\">\n",
" <div class=\"buttons-content\">\n",
" <img src=\n",
"\t \"https://www.tensorflow.org/images/download_logo_32px.png\">\n",
" Download notebook\n",
" </div>\n",
" </a>\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "YWcPbUNg1yez"
},
"source": [
"## Overview\n",
"\n",
"Fairness Indicators is a suite of tools built on top of [TensorFlow Model Analysis (TFMA)](https://tensorflow.github.io/model-analysis/get_started) that enable regular evaluation of fairness metrics in product pipelines. TFMA is a library for evaluating both TensorFlow and non-TensorFlow machine learning models. It allows you to evaluate your models on large amounts of data in a distributed manner, compute in-graph and other metrics over different slices of data, and visualize them in notebooks. \n",
"\n",
"Fairness Indicators is packaged with [TensorFlow Data Validation (TFDV)](https://tensorflow.github.io/data-validation/get_started) and the [What-If Tool](https://pair-code.github.io/what-if-tool/). Using Fairness Indicators allows you to: \n",
"\n",
"* Evaluate model performance, sliced across defined groups of users\n",
"* Gain confidence about results with confidence intervals and evaluations at multiple thresholds\n",
"* Evaluate the distribution of datasets\n",
"* Dive deep into individual slices to explore root causes and opportunities for improvement\n",
"\n",
"In this notebook, you will use Fairness Indicators to fix fairness issues in a model you train using the [Civil Comments dataset](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification). Watch this [video](https://www.youtube.com/watch?v=pHT-ImFXPQo) for more details and context on the real-world scenario this is based on which is also one of primary motivations for creating Fairness Indicators."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "GjuCFktB2IJW"
},
"source": [
"## Dataset\n",
"\n",
"In this notebook, you will work with the [Civil Comments dataset](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification), approximately 2 million public comments made public by the [Civil Comments platform](https://medium.com/@aja_15265/saying-goodbye-to-civil-comments-41859d3a2b1d) in 2017 for ongoing research. This effort was sponsored by [Jigsaw](https://jigsaw.google.com/), who have hosted competitions on Kaggle to help classify toxic comments as well as minimize unintended model bias.\n",
"\n",
"Each individual text comment in the dataset has a toxicity label, with the label being 1 if the comment is toxic and 0 if the comment is non-toxic. Within the data, a subset of comments are labeled with a variety of identity attributes, including categories for gender, sexual orientation, religion, and race or ethnicity."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "u33JXdluZ2lG"
},
"source": [
"## Setup\n",
"\n",
"Install `fairness-indicators` and `witwidget`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "EoRNffG599XP"
},
"outputs": [],
"source": [
"!pip install -q -U pip==20.2\n",
"\n",
"!pip install -q fairness-indicators\n",
"!pip install -q witwidget"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "alYUSbyv59j5"
},
"source": [
"You must restart the Colab runtime after installing. Select **Runtime > Restart** runtime from the Colab menu.\n",
"\n",
"Do not proceed with the rest of this tutorial without first restarting the runtime."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "RbRUqXDm6f1N"
},
"source": [
"Import all other required libraries."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "B8dlyTyiTe-9"
},
"outputs": [],
"source": [
"import os\n",
"import tempfile\n",
"import apache_beam as beam\n",
"import numpy as np\n",
"import pandas as pd\n",
"from datetime import datetime\n",
"import pprint\n",
"\n",
"from google.protobuf import text_format\n",
"\n",
"import tensorflow_hub as hub\n",
"import tensorflow as tf\n",
"import tensorflow_model_analysis as tfma\n",
"import tensorflow_data_validation as tfdv\n",
"\n",
"from tfx_bsl.tfxio import tensor_adapter\n",
"from tfx_bsl.tfxio import tf_example_record\n",
"\n",
"from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators\n",
"from tensorflow_model_analysis.addons.fairness.view import widget_view\n",
"\n",
"from fairness_indicators.tutorial_utils import util\n",
"\n",
"from witwidget.notebook.visualization import WitConfigBuilder\n",
"from witwidget.notebook.visualization import WitWidget\n",
"\n",
"from tensorflow_metadata.proto.v0 import schema_pb2"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "TsplOJGqWCf5"
},
"source": [
"## Download and analyze the data\n",
"\n",
"By default, this notebook downloads a preprocessed version of this dataset, but you may use the original dataset and re-run the processing steps if desired. In the original dataset, each comment is labeled with the percentage of raters who believed that a comment corresponds to a particular identity. For example, a comment might be labeled with the following: { male: 0.3, female: 1.0, transgender: 0.0, heterosexual: 0.8, homosexual_gay_or_lesbian: 1.0 } The processing step groups identity by category (gender, sexual_orientation, etc.) and removes identities with a score less than 0.5. So the example above would be converted to the following: of raters who believed that a comment corresponds to a particular identity. For example, the comment would be labeled with the following: { gender: [female], sexual_orientation: [heterosexual, homosexual_gay_or_lesbian] }"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "qmt4gkBFRBD2"
},
"outputs": [],
"source": [
"download_original_data = False #@param {type:\"boolean\"}\n",
"\n",
"if download_original_data:\n",
" train_tf_file = tf.keras.utils.get_file('train_tf.tfrecord',\n",
" 'https://storage.googleapis.com/civil_comments_dataset/train_tf.tfrecord')\n",
" validate_tf_file = tf.keras.utils.get_file('validate_tf.tfrecord',\n",
" 'https://storage.googleapis.com/civil_comments_dataset/validate_tf.tfrecord')\n",
"\n",
" # The identity terms list will be grouped together by their categories\n",
" # (see 'IDENTITY_COLUMNS') on threshould 0.5. Only the identity term column,\n",
" # text column and label column will be kept after processing.\n",
" train_tf_file = util.convert_comments_data(train_tf_file)\n",
" validate_tf_file = util.convert_comments_data(validate_tf_file)\n",
"\n",
"else:\n",
" train_tf_file = tf.keras.utils.get_file('train_tf_processed.tfrecord',\n",
" 'https://storage.googleapis.com/civil_comments_dataset/train_tf_processed.tfrecord')\n",
" validate_tf_file = tf.keras.utils.get_file('validate_tf_processed.tfrecord',\n",
" 'https://storage.googleapis.com/civil_comments_dataset/validate_tf_processed.tfrecord')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "vFOQ4AaIcAn2"
},
"source": [
"Use TFDV to analyze the data and find potential problems in it, such as missing values and data imbalances, that can lead to fairness disparities."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NdLBi6tN5i7I"
},
"outputs": [],
"source": [
"stats = tfdv.generate_statistics_from_tfrecord(data_location=train_tf_file)\n",
"tfdv.visualize_statistics(stats)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "AS9QiA96GXDE"
},
"source": [
"TFDV shows that there are some significant imbalances in the data which could lead to biased model outcomes. \n",
"\n",
"* The toxicity label (the value predicted by the model) is unbalanced. Only 8% of the examples in the training set are toxic, which means that a classifier could get 92% accuracy by predicting that all comments are non-toxic.\n",
"\n",
"* In the fields relating to identity terms, only 6.6k out of the 1.08 million (0.61%) training examples deal with homosexuality, and those related to bisexuality are even more rare. This indicates that performance on these slices may suffer due to lack of training data."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9ekzb7vVnPCc"
},
"source": [
"## Prepare the data\n",
"\n",
"Define a feature map to parse the data. Each example will have a label, comment text, and identity features `sexual orientation`, `gender`, `religion`, `race`, and `disability` that are associated with the text."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "n4_nXQDykX6W"
},
"outputs": [],
"source": [
"BASE_DIR = tempfile.gettempdir()\n",
"\n",
"TEXT_FEATURE = 'comment_text'\n",
"LABEL = 'toxicity'\n",
"FEATURE_MAP = {\n",
" # Label:\n",
" LABEL: tf.io.FixedLenFeature([], tf.float32),\n",
" # Text:\n",
" TEXT_FEATURE: tf.io.FixedLenFeature([], tf.string),\n",
"\n",
" # Identities:\n",
" 'sexual_orientation':tf.io.VarLenFeature(tf.string),\n",
" 'gender':tf.io.VarLenFeature(tf.string),\n",
" 'religion':tf.io.VarLenFeature(tf.string),\n",
" 'race':tf.io.VarLenFeature(tf.string),\n",
" 'disability':tf.io.VarLenFeature(tf.string),\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1B1ROCM__y8C"
},
"source": [
"Next, set up an input function to feed data into the model. Add a weight column to each example and upweight the toxic examples to account for the class imbalance identified by the TFDV. Use only identity features during the evaluation phase, as only the comments are fed into the model during training."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "YwoC-dzEDid3"
},
"outputs": [],
"source": [
"def train_input_fn():\n",
" def parse_function(serialized):\n",
" parsed_example = tf.io.parse_single_example(\n",
" serialized=serialized, features=FEATURE_MAP)\n",
" # Adds a weight column to deal with unbalanced classes.\n",
" parsed_example['weight'] = tf.add(parsed_example[LABEL], 0.1)\n",
" return (parsed_example,\n",
" parsed_example[LABEL])\n",
" train_dataset = tf.data.TFRecordDataset(\n",
" filenames=[train_tf_file]).map(parse_function).batch(512)\n",
" return train_dataset"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "mfbgerCsEOmN"
},
"source": [
"## Train the model\n",
"\n",
"Create and train a deep learning model on the data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "JaGvNrVijfws"
},
"outputs": [],
"source": [
"model_dir = os.path.join(BASE_DIR, 'train', datetime.now().strftime(\n",
" \"%Y%m%d-%H%M%S\"))\n",
"\n",
"embedded_text_feature_column = hub.text_embedding_column(\n",
" key=TEXT_FEATURE,\n",
" module_spec='https://tfhub.dev/google/nnlm-en-dim128/1')\n",
"\n",
"classifier = tf.estimator.DNNClassifier(\n",
" hidden_units=[500, 100],\n",
" weight_column='weight',\n",
" feature_columns=[embedded_text_feature_column],\n",
" optimizer=tf.keras.optimizers.legacy.Adagrad(learning_rate=0.003),\n",
" loss_reduction=tf.losses.Reduction.SUM,\n",
" n_classes=2,\n",
" model_dir=model_dir)\n",
"\n",
"classifier.train(input_fn=train_input_fn, steps=1000)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "jTPqije9Eg5b"
},
"source": [
"## Analyze the model\n",
"\n",
"After obtaining the trained model, analyze it to compute fairness metrics using TFMA and Fairness Indicators. Begin by exporting the model as a [SavedModel](https://www.tensorflow.org/guide/saved_model). "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-vRc-Jyp8dRm"
},
"source": [
"### Export SavedModel"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "QLjiy5VCzlRw"
},
"outputs": [],
"source": [
"def eval_input_receiver_fn():\n",
" serialized_tf_example = tf.compat.v1.placeholder(\n",
" dtype=tf.string, shape=[None], name='input_example_placeholder')\n",
"\n",
" # This *must* be a dictionary containing a single key 'examples', which\n",
" # points to the input placeholder.\n",
" receiver_tensors = {'examples': serialized_tf_example}\n",
"\n",
" features = tf.io.parse_example(serialized_tf_example, FEATURE_MAP)\n",
" features['weight'] = tf.ones_like(features[LABEL])\n",
"\n",
" return tfma.export.EvalInputReceiver(\n",
" features=features,\n",
" receiver_tensors=receiver_tensors,\n",
" labels=features[LABEL])\n",
"\n",
"tfma_export_dir = tfma.export.export_eval_savedmodel(\n",
" estimator=classifier,\n",
" export_dir_base=os.path.join(BASE_DIR, 'tfma_eval_model'),\n",
" eval_input_receiver_fn=eval_input_receiver_fn)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3j8ODcee8rQ8"
},
"source": [
"### Compute Fairness Metrics\n",
"\n",
"Select the identity to compute metrics for and whether to run with confidence intervals using the dropdown in the panel on the right."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7shDmJbx9mqa"
},
"outputs": [],
"source": [
"#@title Fairness Indicators Computation Options\n",
"tfma_eval_result_path = os.path.join(BASE_DIR, 'tfma_eval_result')\n",
"\n",
"#@markdown Modify the slice_selection for experiments on other identities.\n",
"slice_selection = 'sexual_orientation' #@param [\"sexual_orientation\", \"gender\", \"religion\", \"race\", \"disability\"]\n",
"print(f'Slice selection: {slice_selection}')\n",
"#@markdown Confidence Intervals can help you make better decisions regarding your data, but as it requires computing multiple resamples, is slower particularly in the colab environment that cannot take advantage of parallelization.\n",
"compute_confidence_intervals = False #@param {type:\"boolean\"}\n",
"print(f'Compute confidence intervals: {compute_confidence_intervals}')\n",
"\n",
"# Define slices that you want the evaluation to run on.\n",
"eval_config_pbtxt = \"\"\"\n",
" model_specs {\n",
" label_key: \"%s\"\n",
" }\n",
" metrics_specs {\n",
" metrics {\n",
" class_name: \"FairnessIndicators\"\n",
" config: '{ \"thresholds\": [0.1, 0.3, 0.5, 0.7, 0.9] }'\n",
" }\n",
" }\n",
" slicing_specs {} # overall slice\n",
" slicing_specs {\n",
" feature_keys: [\"%s\"]\n",
" }\n",
" options {\n",
" compute_confidence_intervals { value: %s }\n",
" disabled_outputs { values: \"analysis\" }\n",
" }\n",
" \"\"\" % (LABEL, slice_selection, compute_confidence_intervals)\n",
"eval_config = text_format.Parse(eval_config_pbtxt, tfma.EvalConfig())\n",
"eval_shared_model = tfma.default_eval_shared_model(\n",
" eval_saved_model_path=tfma_export_dir)\n",
"\n",
"schema = text_format.Parse(\n",
" \"\"\"\n",
" tensor_representation_group {\n",
" key: \"\"\n",
" value {\n",
" tensor_representation {\n",
" key: \"comment_text\"\n",
" value {\n",
" dense_tensor {\n",
" column_name: \"comment_text\"\n",
" shape {}\n",
" }\n",
" }\n",
" }\n",
" }\n",
" }\n",
" feature {\n",
" name: \"comment_text\"\n",
" type: BYTES\n",
" }\n",
" feature {\n",
" name: \"toxicity\"\n",
" type: FLOAT\n",
" }\n",
" feature {\n",
" name: \"sexual_orientation\"\n",
" type: BYTES\n",
" }\n",
" feature {\n",
" name: \"gender\"\n",
" type: BYTES\n",
" }\n",
" feature {\n",
" name: \"religion\"\n",
" type: BYTES\n",
" }\n",
" feature {\n",
" name: \"race\"\n",
" type: BYTES\n",
" }\n",
" feature {\n",
" name: \"disability\"\n",
" type: BYTES\n",
" }\n",
" \"\"\", schema_pb2.Schema())\n",
"tfxio = tf_example_record.TFExampleRecord(\n",
" file_pattern=validate_tf_file,\n",
" schema=schema,\n",
" raw_record_column_name=tfma.ARROW_INPUT_COLUMN)\n",
"tensor_adapter_config = tensor_adapter.TensorAdapterConfig(\n",
" arrow_schema=tfxio.ArrowSchema(),\n",
" tensor_representations=tfxio.TensorRepresentations())\n",
"\n",
"with beam.Pipeline() as pipeline:\n",
" (pipeline\n",
" | 'ReadFromTFRecordToArrow' >> tfxio.BeamSource()\n",
" | 'ExtractEvaluateAndWriteResults' >> tfma.ExtractEvaluateAndWriteResults(\n",
" eval_config=eval_config,\n",
" eval_shared_model=eval_shared_model,\n",
" output_path=tfma_eval_result_path,\n",
" tensor_adapter_config=tensor_adapter_config))\n",
"\n",
"eval_result = tfma.load_eval_result(output_path=tfma_eval_result_path)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "jtDpTBPeRw2d"
},
"source": [
"### Visualize data using the What-if Tool\n",
"\n",
"In this section, you'll use the What-If Tool's interactive visual interface to explore and manipulate data at a micro-level.\n",
"\n",
"Each point on the scatter plot on the right-hand panel represents one of the examples in the subset loaded into the tool. Click on one of the points to see details about this particular example in the left-hand panel. The comment text, ground truth toxicity, and applicable identities are shown. At the bottom of this left-hand panel, you see the inference results from the model you just trained.\n",
"\n",
"Modify the text of the example and then click the **Run inference** button to view how your changes caused the perceived toxicity prediction to change."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "wtjZo4BDlV1m"
},
"outputs": [],
"source": [
"DEFAULT_MAX_EXAMPLES = 1000\n",
"\n",
"# Load 100000 examples in memory. When first rendered, \n",
"# What-If Tool should only display 1000 of these due to browser constraints.\n",
"def wit_dataset(file, num_examples=100000):\n",
" dataset = tf.data.TFRecordDataset(\n",
" filenames=[file]).take(num_examples)\n",
" return [tf.train.Example.FromString(d.numpy()) for d in dataset]\n",
"\n",
"wit_data = wit_dataset(train_tf_file)\n",
"config_builder = WitConfigBuilder(wit_data[:DEFAULT_MAX_EXAMPLES]).set_estimator_and_feature_spec(\n",
" classifier, FEATURE_MAP).set_label_vocab(['non-toxicity', LABEL]).set_target_feature(LABEL)\n",
"wit = WitWidget(config_builder)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ktlASJQIzE3l"
},
"source": [
"## Render Fairness Indicators\n",
"\n",
"Render the Fairness Indicators widget with the exported evaluation results.\n",
"\n",
"Below you will see bar charts displaying performance of each slice of the data on selected metrics. You can adjust the baseline comparison slice as well as the displayed threshold(s) using the dropdown menus at the top of the visualization. \n",
"\n",
"The Fairness Indicator widget is integrated with the What-If Tool rendered above. If you select one slice of the data in the bar chart, the What-If Tool will update to show you examples from the selected slice. When the data reloads in the What-If Tool above, try modifying **Color By** to **toxicity**. This can give you a visual understanding of the toxicity balance of examples by slice."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "JNaNhTCTAMHm"
},
"outputs": [],
"source": [
"event_handlers={'slice-selected':\n",
" wit.create_selection_callback(wit_data, DEFAULT_MAX_EXAMPLES)}\n",
"widget_view.render_fairness_indicator(eval_result=eval_result,\n",
" slicing_column=slice_selection,\n",
" event_handlers=event_handlers\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "nRuZsLr6V_fY"
},
"source": [
"With this particular dataset and task, systematically higher false positive and false negative rates for certain identities can lead to negative consequences. For example, in a content moderation system, a higher-than-overall false positive rate for a certain group can lead to those voices being silenced. Thus, it is important to regularly evaluate these types of criteria as you develop and improve models, and utilize tools such as Fairness Indicators, TFDV, and WIT to help illuminate potential problems. Once you've identified fairness issues, you can experiment with new data sources, data balancing, or other techniques to improve performance on underperforming groups.\n",
"\n",
"See [here](../../guide/guidance) for more information and guidance on how to use Fairness Indicators.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "wCMEMtGfx0Ti"
},
"source": [
"## Use fairness evaluation results\n",
"\n",
"The [`eval_result`](https://tensorflow.github.io/model-analysis/api_docs/python/tfma/#tensorflow_model_analysis.EvalResult) object, rendered above in `render_fairness_indicator()`, has its own API that you can leverage to read TFMA results into your programs."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "z6stkMLwyfza"
},
"source": [
"### Get evaluated slices and metrics\n",
"\n",
"Use [`get_slice_names()`](https://tensorflow.github.io/model-analysis/api_docs/python/tfma/#tensorflow_model_analysis.EvalResult.get_slice_names) and [`get_metric_names()`](https://tensorflow.github.io/model-analysis/api_docs/python/tfma/#tensorflow_model_analysis.EvalResult.get_metric_names) to get the evaluated slices and metrics, respectively."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "eXrt7SdZyzWD"
},
"outputs": [],
"source": [
"pp = pprint.PrettyPrinter()\n",
"\n",
"print(\"Slices:\")\n",
"pp.pprint(eval_result.get_slice_names())\n",
"print(\"\\nMetrics:\")\n",
"pp.pprint(eval_result.get_metric_names())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ctAvudY2zUu4"
},
"source": [
"Use [`get_metrics_for_slice()`](https://tensorflow.github.io/model-analysis/api_docs/python/tfma/#tensorflow_model_analysis.EvalResultget_metrics_for_slice) to get the metrics for a particular slice as a dictionary mapping metric names to [metric values](https://github.com/tensorflow/model-analysis/blob/cdb6790dcd7a37c82afb493859b3ef4898963fee/tensorflow_model_analysis/proto/metrics_for_slice.proto#L194)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "zjCxZGHmzF0R"
},
"outputs": [],
"source": [
"baseline_slice = ()\n",
"heterosexual_slice = (('sexual_orientation', 'heterosexual'),)\n",
"\n",
"print(\"Baseline metric values:\")\n",
"pp.pprint(eval_result.get_metrics_for_slice(baseline_slice))\n",
"print(\"\\nHeterosexual metric values:\")\n",
"pp.pprint(eval_result.get_metrics_for_slice(heterosexual_slice))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "UDo3LhoR0Rq1"
},
"source": [
"Use [`get_metrics_for_all_slices()`](https://tensorflow.github.io/model-analysis/api_docs/python/tfma/#tensorflow_model_analysis.EvalResult.get_metrics_for_all_slices) to get the metrics for all slices as a dictionary mapping each slice to the corresponding metrics dictionary you obtain from running `get_metrics_for_slice()` on it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "96N2l2xI0fZd"
},
"outputs": [],
"source": [
"pp.pprint(eval_result.get_metrics_for_all_slices())"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"collapsed_sections": [],
"name": "Fairness Indicators Example Colab.ipynb",
"private_outputs": true,
"provenance": [],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.22"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
================================================
FILE: docs/tutorials/Fairness_Indicators_Pandas_Case_Study.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "Bfrh3DUze0QN"
},
"source": [
"##### Copyright 2020 The TensorFlow Authors."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "sx-jnufYfcJG"
},
"outputs": [],
"source": [
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "s1bQihY6-Y4N"
},
"source": [
"# Pandas DataFrame to Fairness Indicators Case Study\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "XHTjeiUMeolM"
},
"source": [
"<div class=\"buttons-wrapper\">\n",
" <a class=\"md-button\" target=\"_blank\" href=\n",
" \"https://www.tensorflow.org/responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_Pandas_Case_Study\">\n",
" <div class=\"buttons-content\">\n",
" <img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">\n",
" View on TensorFlow.org\n",
" </div>\n",
" </a>\n",
" <a class=\"md-button\" target=\"_blank\" href=\n",
" \"https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_Pandas_Case_Study.ipynb\">\n",
" <div class=\"buttons-content\">\n",
" <img src=\n",
"\t \"https://www.tensorflow.org/images/colab_logo_32px.png\">\n",
" Run in Google Colab\n",
" </div>\n",
" </a>\n",
" <a class=\"md-button\" target=\"_blank\" href=\n",
" \"https://github.com/tensorflow/fairness-indicators/tree/master/docs/tutorials/Fairness_Indicators_Pandas_Case_Study.ipynb\">\n",
" <div class=\"buttons-content\">\n",
" <img width=\"32px\" src=\n",
"\t \"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">\n",
" View source on GitHub\n",
" </div>\n",
" </a>\n",
" <a class=\"md-button\" href=\n",
" \"https://storage.googleapis.com/tensorflow_docs/fairness-indicators/g3doc/tutorials/Fairness_Indicators_Pandas_Case_Study.ipynb\">\n",
" <div class=\"buttons-content\">\n",
" <img src=\n",
"\t \"https://www.tensorflow.org/images/download_logo_32px.png\">\n",
" Download notebook\n",
" </div>\n",
" </a>\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ay80altXzvgZ"
},
"source": [
"## Case Study Overview\n",
"In this case study we will apply [TensorFlow Model Analysis](https://tensorflow.github.io/model-analysis/get_started) and [Fairness Indicators](https://tensorflow.github.io/fairness-indicators) to evaluate data stored as a Pandas DataFrame, where each row contains ground truth labels, various features, and a model prediction. We will show how this workflow can be used to spot potential fairness concerns, independent of the framework one used to construct and train the model. As in this case study, we can analyze the results from any machine learning framework (e.g. TensorFlow, JAX, etc) once they are converted to a Pandas DataFrame.\n",
" \n",
"For this exercise, we will leverage the Deep Neural Network (DNN) model that was developed in the [Shape Constraints for Ethics with Tensorflow Lattice](https://colab.research.google.com/github/tensorflow/lattice/blob/master/docs/tutorials/shape_constraints_for_ethics.ipynb#scrollTo=uc0VwsT5nvQi) case study using the Law School Admissions dataset from the Law School Admissions Council (LSAC). This classifier attempts to predict whether or not a student will pass the bar, based on their Law School Admission Test (LSAT) score and undergraduate GPA.\n",
"\n",
"## LSAC Dataset\n",
"The dataset used within this case study was originally collected for a study called '[LSAC National Longitudinal Bar Passage Study. LSAC Research Report Series](https://eric.ed.gov/?id=ED469370)' by Linda Wightman in 1998. The dataset is currently hosted [here](http://www.seaphe.org/databases.php).\n",
"\n",
"* **dnn_bar_pass_prediction**: The LSAT prediction from the DNN model.\n",
"* **gender**: Gender of the student.\n",
"* **lsat**: LSAT score received by the student.\n",
"* **pass_bar**: Ground truth label indicating whether or not the student eventually passed the bar.\n",
"* **race**: Race of the student.\n",
"* **ugpa**: A student's undergraduate GPA.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Ob01ASKqixfw"
},
"outputs": [],
"source": [
"!pip install -q -U pip==20.2\n",
"\n",
"!pip install -q -U \\\n",
" tensorflow-model-analysis==0.48.0 \\\n",
" tensorflow-data-validation==1.17.0 \\\n",
" tfx-bsl==1.17.1"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tnxSvgkaSEIj"
},
"source": [
"## Importing required packages:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "0q8cTfpTkEMP"
},
"outputs": [],
"source": [
"import os\n",
"import tempfile\n",
"import pandas as pd\n",
"import six.moves.urllib as urllib\n",
"import pprint\n",
"\n",
"import tensorflow_model_analysis as tfma\n",
"from google.protobuf import text_format\n",
"\n",
"import tensorflow as tf\n",
"tf.compat.v1.enable_v2_behavior()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "b8kWW3t4-eS1"
},
"source": [
"## Download the data and explore the initial dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "wMZJtgj0qJ0x"
},
"outputs": [],
"source": [
"# Download the LSAT dataset and setup the required filepaths.\n",
"_DATA_ROOT = tempfile.mkdtemp(prefix='lsat-data')\n",
"_DATA_PATH = 'https://storage.googleapis.com/lawschool_dataset/bar_pass_prediction.csv'\n",
"_DATA_FILEPATH = os.path.join(_DATA_ROOT, 'bar_pass_prediction.csv')\n",
"\n",
"data = urllib.request.urlopen(_DATA_PATH)\n",
"\n",
"_LSAT_DF = pd.read_csv(data)\n",
"\n",
"# To simpliy the case study, we will only use the columns that will be used for\n",
"# our model.\n",
"_COLUMN_NAMES = [\n",
" 'dnn_bar_pass_prediction',\n",
" 'gender',\n",
" 'lsat',\n",
" 'pass_bar',\n",
" 'race1',\n",
" 'ugpa',\n",
"]\n",
"\n",
"_LSAT_DF.dropna()\n",
"_LSAT_DF['gender'] = _LSAT_DF['gender'].astype(str)\n",
"_LSAT_DF['race1'] = _LSAT_DF['race1'].astype(str)\n",
"_LSAT_DF = _LSAT_DF[_COLUMN_NAMES]\n",
"\n",
"_LSAT_DF.head()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "GyeVg2s7-wlB"
},
"source": [
"## Configure Fairness Indicators.\n",
"There are several parameters that you’ll need to take into account when using Fairness Indicators with a DataFrame \n",
"\n",
"* Your input DataFrame must contain a prediction column and label column from your model. By default Fairness Indicators will look for a prediction column called `prediction` and a label column called `label` within your DataFrame.\n",
" * If either of these values are not found a KeyError will be raised.\n",
"\n",
"* In addition to a DataFrame, you’ll also need to include an `eval_config` that should include the metrics to compute, slices to compute the metrics on, and the column names for example labels and predictions. \n",
" * `metrics_specs` will set the metrics to compute. The `FairnessIndicators` metric will be required to render the fairness metrics and you can see a list of additional optional metrics [here](https://tensorflow.github.io/model-analysis/metrics).\n",
"\n",
" * `slicing_specs` is an optional slicing parameter to specify what feature you’re interested in investigating. Within this case study race1 is used, however you can also set this value to another feature (for example gender in the context of this DataFrame). If `slicing_specs` is not provided all features will be included.\n",
" * If your DataFrame includes a label or prediction column that is different from the default `prediction` or `label`, you can configure the `label_key` and `prediction_key` to a new value.\n",
"\n",
"* If `output_path` is not specified a temporary directory will be created."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "53caFasB5V9p"
},
"outputs": [],
"source": [
"# Specify Fairness Indicators in eval_config.\n",
"eval_config = text_format.Parse(\"\"\"\n",
" model_specs {\n",
" prediction_key: 'dnn_bar_pass_prediction',\n",
" label_key: 'pass_bar'\n",
" }\n",
" metrics_specs {\n",
" metrics {class_name: \"AUC\"}\n",
" metrics {\n",
" class_name: \"FairnessIndicators\"\n",
" config: '{\"thresholds\": [0.50, 0.90]}'\n",
" }\n",
" }\n",
" slicing_specs {\n",
" feature_keys: 'race1'\n",
" }\n",
" slicing_specs {}\n",
" \"\"\", tfma.EvalConfig())\n",
"\n",
"# Run TensorFlow Model Analysis.\n",
"eval_result = tfma.analyze_raw_data(\n",
" data=_LSAT_DF,\n",
" eval_config=eval_config,\n",
" output_path=_DATA_ROOT)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "KD96mw0e--DE"
},
"source": [
"## Explore model performance with Fairness Indicators.\n",
"\n",
"After running Fairness Indicators, we can visualize different metrics that we selected to analyze our models performance. Within this case study we’ve included Fairness Indicators and arbitrarily picked AUC.\n",
"\n",
"When we first look at the overall AUC for each race slice we can see a slight discrepancy in model performance, but nothing that is arguably alarming.\n",
"\n",
"* **Asian**: 0.58\n",
"* **Black**: 0.58\n",
"* **Hispanic**: 0.58\n",
"* **Other**: 0.64\n",
"* **White**: 0.6\n",
"\n",
"However, when we look at the false negative rates split by race, our model again incorrectly predicts the likelihood of a user passing the bar at different rates and, this time, does so by a lot. \n",
"\n",
"* **Asian**: 0.01\n",
"* **Black**: 0.05\n",
"* **Hispanic**: 0.02\n",
"* **Other**: 0.01\n",
"* **White**: 0.01\n",
"\n",
"Most notably the difference between Black and White students is about 380%, meaning that our model is nearly 4x more likely to incorrectly predict that a black student will not pass the bar, than a whilte student. If we were to continue with this effort, a practitioner could use these results as a signal that they should spend more time ensuring that their model works well for people from all backgrounds."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NIdchYPb-_ZV"
},
"outputs": [],
"source": [
"# Render Fairness Indicators.\n",
"tfma.addons.fairness.view.widget_view.render_fairness_indicator(eval_result)"
]
},
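  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before walking through the `eval_result` API in detail below, here is a minimal sketch for reading the per-slice false negative rates programmatically rather than from the widget above. Note that the exact metric key is an assumption based on the `[0.50, 0.90]` thresholds configured in `eval_config`; adjust it if your configuration differs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal sketch for reading per-slice false negative rates.\n",
    "# NOTE: the metric key below is an assumption based on the thresholds\n",
    "# configured in eval_config ([0.50, 0.90]); adjust it if yours differ.\n",
    "fnr_key = 'fairness_indicators_metrics/false_negative_rate@0.5'\n",
    "\n",
    "for slice_key, metrics in eval_result.get_metrics_for_all_slices().items():\n",
    "  if fnr_key in metrics:\n",
    "    print(slice_key, metrics[fnr_key])"
   ]
  },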
{
"cell_type": "markdown",
"metadata": {
"id": "NprhBTCbY1sF"
},
"source": [
"# tfma.EvalResult"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6f92-e98Y40r"
},
"source": [
"The [`eval_result`](https://tensorflow.github.io/model-analysis/api_docs/python/tfma/#tensorflow_model_analysis.EvalResult) object, rendered above in `render_fairness_indicator()`, has its own API that can be used to read TFMA results into your programs."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "CDDUxdx-Y8e0"
},
"source": [
"## [`get_slice_names()`](https://tensorflow.github.io/model-analysis/api_docs/python/tfma/#tensorflow_model_analysis.EvalResult.get_slice_names) and [`get_metric_names()`](https://tensorflow.github.io/model-analysis/api_docs/python/tfma/#tensorflow_model_analysis.EvalResult.get_metric_names)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "oG_mNUNbY98t"
},
"source": [
"To get the evaluated slices and metrics, you can use the respective functions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "kbA1sXhCY_G7"
},
"outputs": [],
"source": [
"pp = pprint.PrettyPrinter()\n",
"\n",
"print(\"Slices:\")\n",
"pp.pprint(eval_result.get_slice_names())\n",
"print(\"\\nMetrics:\")\n",
"pp.pprint(eval_result.get_metric_names())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "rA1M8aBmZAk6"
},
"source": [
"## [`get_metrics_for_slice()`](https://tensorflow.github.io/model-analysis/api_docs/python/tfma/#tensorflow_model_analysis.EvalResult.get_metrics_for_slice) and [`get_metrics_for_all_slices()`](https://tensorflow.github.io/model-analysis/api_docs/python/tfma/#tensorflow_model_analysis.EvalResult.get_metrics_for_all_slices)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "a3Ath5MsZCRX"
},
"source": [
"If you want to get the metrics for a particular slice, you can use `get_metrics_for_slice()`. It returns a dictionary mapping metric names to [metric values](https://github.com/tensorflow/model-analysis/blob/cdb6790dcd7a37c82afb493859b3ef4898963fee/tensorflow_model_analysis/proto/metrics_for_slice.proto#L194)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "9BWg5HoyZDh-"
},
"outputs": [],
"source": [
"baseline_slice = ()\n",
"black_slice = (('race1', 'black'),)\n",
"\n",
"print(\"Baseline metric values:\")\n",
"pp.pprint(eval_result.get_metrics_for_slice(baseline_slice))\n",
"print(\"Black metric values:\")\n",
"pp.pprint(eval_result.get_metrics_for_slice(black_slice))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "bDcOxvqBZEfg"
},
"source": [
"If you want to get the metrics for all slices, `get_metrics_for_all_slices()` returns a dictionary mapping each slice to the corresponding `get_metrics_for_slices(slice)`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "p4NQCi52ZFrw"
},
"outputs": [],
"source": [
"pp.pprint(eval_result.get_metrics_for_all_slices())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "y-nbqnSTkmW3"
},
"source": [
"## Conclusion\n",
"Within this case study we imported a dataset into a Pandas DataFrame that we then analyzed with Fairness Indicators. Understanding the results of your model and underlying data is an important step in ensuring your model doesn't reflect harmful bias. In the context of this case study we examined the the LSAC dataset and how predictions from this data could be impacted by a students race. The concept of “what is unfair and what is fair have been introduced in multiple disciplines for well over 50 years, including in education, hiring, and machine learning.”<sup>1</sup> Fairness Indicator is a tool to help mitigate fairness concerns in your machine learning model.\n",
"\n",
"For more information on using Fairness Indicators and resources to learn more about fairness concerns see [here](../../).\n",
"\n",
"---\n",
"\n",
"1. Hutchinson, B., Mitchell, M. (2018). 50 Years of Test (Un)fairness: Lessons for Machine Learning. https://arxiv.org/abs/1811.10104\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "REV1rBnoBAo1"
},
"source": [
"## Appendix\n",
"\n",
"Below are a few functions to help convert ML models to Pandas DataFrame.\n"
]
},
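  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, a hypothetical sketch for a generic Keras binary classifier. The `model`, `features_df`, and `labels` names below are placeholders rather than objects defined in this case study, and the `prediction` and `label` column names match the Fairness Indicators defaults described earlier."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Keras model to Pandas DataFrame (hypothetical sketch).\n",
    "# Assumes `model` is a trained binary classifier that outputs probabilities,\n",
    "# `features_df` is a DataFrame of model inputs, and `labels` is a Series of\n",
    "# ground truth labels aligned with `features_df`.\n",
    "\n",
    "def keras_to_dataframe(model, features_df, labels):\n",
    "  df = features_df.copy()\n",
    "  # Flatten the (num_examples, 1) probability output into a 1-D column.\n",
    "  df['prediction'] = model.predict(features_df.values).reshape(-1)\n",
    "  df['label'] = labels.values\n",
    "  return df"
   ]
  },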
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "F4qv9GXiBsFA"
},
"outputs": [],
"source": [
"# TensorFlow Estimator to Pandas DataFrame:\n",
"\n",
"# _X_VALUE = # X value of binary estimator.\n",
"# _Y_VALUE = # Y value of binary estimator.\n",
"# _GROUND_TRUTH_LABEL = # Ground truth value of binary estimator.\n",
"\n",
"def _get_predicted_probabilities(estimator, input_df, get_input_fn):\n",
" predictions = estimator.predict(\n",
" input_fn=get_input_fn(input_df=input_df, num_epochs=1))\n",
" return [prediction['probabilities'][1] for prediction in predictions]\n",
"\n",
"def _get_input_fn_law(input_df, num_epochs, batch_size=None):\n",
" return tf.compat.v1.estimator.inputs.pandas_input_fn(\n",
" x=input_df[[_X_VALUE, _Y_VALUE]],\n",
" y=input_df[_GROUND_TRUTH_LABEL],\n",
" num_epochs=num_epochs,\n",
" batch_size=batch_size or len(input_df),\n",
" shuffle=False)\n",
"\n",
"def estimator_to_dataframe(estimator, input_df, num_keypoints=20):\n",
" x = np.linspace(min(input_df[_X_VALUE]), max(input_df[_X_VALUE]), num_keypoints)\n",
" y = np.linspace(min(input_df[_Y_VALUE]), max(input_df[_Y_VALUE]), num_keypoints)\n",
"\n",
" x_grid, y_grid = np.meshgrid(x, y)\n",
"\n",
" positions = np.vstack([x_grid.ravel(), y_grid.ravel()])\n",
" plot_df = pd.DataFrame(positions.T, columns=[_X_VALUE, _Y_VALUE])\n",
" plot_df[_GROUND_TRUTH_LABEL] = np.ones(len(plot_df))\n",
" predictions = _get_predicted_probabilities(\n",
" estimator=estimator, input_df=plot_df, get_input_fn=_get_input_fn_law)\n",
" return pd.DataFrame(\n",
" data=np.array(np.reshape(predictions, x_grid.shape)).flatten())"
]
}
],
"metadata": {
"colab": {
"collapsed_sections": [
"Bfrh3DUze0QN"
],
"name": "Pandas DataFrame to Fairness Indicators Case Study",
"private_outputs": true,
"provenance": [],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.22"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
================================================
FILE: docs/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "JmvzTcYice-_"
},
"source": [
"##### Copyright 2020 The TensorFlow Authors."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "zlvAS8a9cD_t"
},
"outputs": [],
"source": [
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "b2VYQpTttmVN"
},
"source": [
"# TensorFlow Constrained Optimization Example Using CelebA Dataset"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3iFsS2WSeRwe"
},
"source": [
"<div class=\"buttons-wrapper\">\n",
" <a class=\"md-button\" target=\"_blank\" href=\n",
" \"https://www.tensorflow.org/responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study\">\n",
" <div class=\"buttons-content\">\n",
" <img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">\n",
" View on TensorFlow.org\n",
" </div>\n",
" </a>\n",
" <a class=\"md-button\" target=\"_blank\" href=\n",
" \"https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb\">\n",
" <div class=\"buttons-content\">\n",
" <img src=\n",
"\t \"https://www.tensorflow.org/images/colab_logo_32px.png\">\n",
" Run in Google Colab\n",
" </div>\n",
" </a>\n",
" <a class=\"md-button\" target=\"_blank\" href=\n",
" \"https://github.com/tensorflow/fairness-indicators/tree/master/docs/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb\">\n",
" <div class=\"buttons-content\">\n",
" <img width=\"32px\" src=\n",
"\t \"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">\n",
" View source on GitHub\n",
" </div>\n",
" </a>\n",
" <a class=\"md-button\" href=\n",
" \"https://storage.googleapis.com/tensorflow_docs/fairness-indicators/g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb\">\n",
" <div class=\"buttons-content\">\n",
" <img src=\n",
"\t \"https://www.tensorflow.org/images/download_logo_32px.png\">\n",
" Download notebook\n",
" </div>\n",
" </a>\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-DQoReGDeN16"
},
"source": [
"This notebook demonstrates an easy way to create and optimize constrained problems using the TFCO library. This method can be useful in improving models when we find that they’re not performing equally well across different slices of our data, which we can identify using [Fairness Indicators](../../). The second of Google’s AI principles states that our technology should avoid creating or reinforcing unfair bias, and we believe this technique can help improve model fairness in some situations. In particular, this notebook will:\n",
"\n",
"\n",
"* Train a simple, *unconstrained* neural network model to detect a person's smile in images using [`tf.keras`](https://www.tensorflow.org/guide/keras) and the large-scale CelebFaces Attributes ([CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html)) dataset.\n",
"* Evaluate model performance against a commonly used fairness metric across age groups, using Fairness Indicators.\n",
"* Set up a simple constrained optimization problem to achieve fairer performance across age groups.\n",
"* Retrain the now *constrained* model and evaluate performance again, ensuring that our chosen fairness metric has improved.\n",
"\n",
"Last updated: 3/11 Feb 2020"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "JyCbEWt5Zxe2"
},
"source": [
"# Installation\n",
"This notebook was created in [Colaboratory](https://research.google.com/colaboratory/faq.html), connected to the Python 3 Google Compute Engine backend. If you wish to host this notebook in a different environment, then you should not experience any major issues provided you include all the required packages in the cells below.\n",
"\n",
"Note that the very first time you run the pip installs, you may be asked to restart the runtime because of preinstalled out of date packages. Once you do so, the correct packages will be used."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "T-Zm-KDdt0bn"
},
"outputs": [],
"source": [
"#@title Pip installs\n",
"!pip install -q -U pip==20.2\n",
"\n",
"!pip install git+https://github.com/google-research/tensorflow_constrained_optimization\n",
"!pip install -q tensorflow-datasets tensorflow\n",
"!pip install fairness-indicators \\\n",
" \"absl-py==0.12.0\" \\\n",
" \"apache-beam<3,>=2.47\" \\\n",
" \"avro-python3==1.9.1\" \\\n",
" \"pyzmq==17.0.0\"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "UXWXhBLvISOY"
},
"source": [
"Note that depending on when you run the cell below, you may receive a warning about the default version of TensorFlow in Colab switching to TensorFlow 2.X soon. You can safely ignore that warning as this notebook was designed to be compatible with TensorFlow 1.X and 2.X."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "UTBBdSGaZ8aW"
},
"outputs": [],
"source": [
"#@title Import Modules\n",
"import os\n",
"import sys\n",
"import tempfile\n",
"import urllib\n",
"\n",
"import tensorflow as tf\n",
"from tensorflow import keras\n",
"\n",
"import tensorflow_datasets as tfds\n",
"tfds.disable_progress_bar()\n",
"\n",
"import numpy as np\n",
"\n",
"import tensorflow_constrained_optimization as tfco\n",
"\n",
"from tensorflow_metadata.proto.v0 import schema_pb2\n",
"from tfx_bsl.tfxio import tensor_adapter\n",
"from tfx_bsl.tfxio import tf_example_record"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "70tLum8uIZUm"
},
"source": [
"Additionally, we add a few imports that are specific to Fairness Indicators which we will use to evaluate and visualize the model's performance."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "7Se0Z0Bo9K-5"
},
"outputs": [],
"source": [
"#@title Fairness Indicators related imports\n",
"import tensorflow_model_analysis as tfma\n",
"import fairness_indicators as fi\n",
"from google.protobuf import text_format\n",
"import apache_beam as beam"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xSG2HP7goGrj"
},
"source": [
"Although TFCO is compatible with eager and graph execution, this notebook assumes that eager execution is enabled by default as it is in TensorFlow 2.x. To ensure that nothing breaks, eager execution will be enabled in the cell below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "W0ZusW1-lBao"
},
"outputs": [],
"source": [
"#@title Enable Eager Execution and Print Versions\n",
"if tf.__version__ < \"2.0.0\":\n",
" tf.compat.v1.enable_eager_execution()\n",
" print(\"Eager execution enabled.\")\n",
"else:\n",
" print(\"Eager execution enabled by default.\")\n",
"\n",
"print(\"TensorFlow \" + tf.__version__)\n",
"print(\"TFMA \" + tfma.VERSION_STRING)\n",
"print(\"TFDS \" + tfds.version.__version__)\n",
"print(\"FI \" + fi.version.__version__)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "idY3Uuk3yvty"
},
"source": [
"# CelebA Dataset\n",
"[CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) is a large-scale face attributes dataset with more than 200,000 celebrity images, each with 40 attribute annotations (such as hair type, fashion accessories, facial features, etc.) and 5 landmark locations (eyes, mouth and nose positions). For more details take a look at [the paper](https://liuziwei7.github.io/projects/FaceAttributes.html).\n",
"With the permission of the owners, we have stored this dataset on Google Cloud Storage and mostly access it via [TensorFlow Datasets(`tfds`)](https://www.tensorflow.org/datasets).\n",
"\n",
"In this notebook:\n",
"* Our model will attempt to classify whether the subject of the image is smiling, as represented by the \"Smiling\" attribute<sup>*</sup>.\n",
"* Images will be resized from 218x178 to 28x28 to reduce the execution time and memory when training.\n",
"* Our model's performance will be evaluated across age groups, using the binary \"Young\" attribute. We will call this \"age group\" in this notebook.\n",
"\n",
"___\n",
"\n",
"<sup>*</sup> While there is little information available about the labeling methodology for this dataset, we will assume that the \"Smiling\" attribute was determined by a pleased, kind, or amused expression on the subject's face. For the purpose of this case study, we will take these labels as ground truth.\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "zCSemFST0b89"
},
"outputs": [],
"source": [
"gcs_base_dir = \"gs://celeb_a_dataset/\"\n",
"celeb_a_builder = tfds.builder(\"celeb_a\", data_dir=gcs_base_dir, version='2.0.0')\n",
"\n",
"celeb_a_builder.download_and_prepare()\n",
"\n",
"num_test_shards_dict = {'0.3.0': 4, '2.0.0': 2} # Used because we download the test dataset separately\n",
"version = str(celeb_a_builder.info.version)\n",
"print('Celeb_A dataset version: %s' % version)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "Ocqv3R06APfW"
},
"outputs": [],
"source": [
"#@title Test dataset helper functions\n",
"local_root = tempfile.mkdtemp(prefix='test-data')\n",
"def local_test_filename_base():\n",
" return local_root\n",
"\n",
"def local_test_file_full_prefix():\n",
" return os.path.join(local_test_filename_base(), \"celeb_a-test.tfrecord\")\n",
"\n",
"def copy_test_files_to_local():\n",
" filename_base = local_test_file_full_prefix()\n",
" num_test_shards = num_test_shards_dict[version]\n",
" for shard in range(num_test_shards):\n",
" url = \"https://storage.googleapis.com/celeb_a_dataset/celeb_a/%s/celeb_a-test.tfrecord-0000%s-of-0000%s\" % (version, shard, num_test_shards)\n",
" filename = \"%s-0000%s-of-0000%s\" % (filename_base, shard, num_test_shards)\n",
" res = urllib.request.urlretrieve(url, filename)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "u5PDLXZb_uIj"
},
"source": [
"## Caveats\n",
"Before moving forward, there are several considerations to keep in mind in using CelebA:\n",
"* Although in principle this notebook could use any dataset of face images, CelebA was chosen because it contains public domain images of public figures.\n",
"* All of the attribute annotations in CelebA are operationalized as binary categories. For example, the \"Young\" attribute (as determined by the dataset labelers) is denoted as either present or absent in the image.\n",
"* CelebA's categorizations do not reflect real human diversity of attributes.\n",
"* For the purposes of this notebook, the feature containing the \"Young\" attribute is referred to as \"age group\", where the presence of the \"Young\" attribute in an image is labeled as a member of the \"Young\" age group and the absence of the \"Young\" attribute is labeled as a member of the \"Not Young\" age group. These are assumptions made as this information is not mentioned in the [original paper](http://openaccess.thecvf.com/content_iccv_2015/html/Liu_Deep_Learning_Face_ICCV_2015_paper.html).\n",
"* As such, performance in the models trained in this notebook is tied to the ways the attributes have been operationalized and annotated by the authors of CelebA.\n",
"* This model should not be used for commercial purposes as that would violate [CelebA's non-commercial research agreement](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Elkiu92cY2bY"
},
"source": [
"# Setting Up Input Functions\n",
"The subsequent cells will help streamline the input pipeline as well as visualize performance.\n",
"\n",
"First we define some data-related variables and define a requisite preprocessing function."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "gDdarTZxk6y4"
},
"outputs": [],
"source": [
"#@title Define Variables\n",
"ATTR_KEY = \"attributes\"\n",
"IMAGE_KEY = \"image\"\n",
"LABEL_KEY = \"Smiling\"\n",
"GROUP_KEY = \"Young\"\n",
"IMAGE_SIZE = 28"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "SD-H70Je0cTp"
},
"outputs": [],
"source": [
"#@title Define Preprocessing Functions\n",
"def preprocess_input_dict(feat_dict):\n",
" # Separate out the image and target variable from the feature dictionary.\n",
" image = feat_dict[IMAGE_KEY]\n",
" label = feat_dict[ATTR_KEY][LABEL_KEY]\n",
" group = feat_dict[ATTR_KEY][GROUP_KEY]\n",
"\n",
" # Resize and normalize image.\n",
" image = tf.cast(image, tf.float32)\n",
" image = tf.image.resize(image, [IMAGE_SIZE, IMAGE_SIZE])\n",
" image /= 255.0\n",
"\n",
" # Cast label and group to float32.\n",
" label = tf.cast(label, tf.float32)\n",
" group = tf.cast(group, tf.float32)\n",
"\n",
" feat_dict[IMAGE_KEY] = image\n",
" feat_dict[ATTR_KEY][LABEL_KEY] = label\n",
" feat_dict[ATTR_KEY][GROUP_KEY] = group\n",
"\n",
" return feat_dict\n",
"\n",
"get_image_and_label = lambda feat_dict: (feat_dict[IMAGE_KEY], feat_dict[ATTR_KEY][LABEL_KEY])\n",
"get_image_label_and_group = lambda feat_dict: (feat_dict[IMAGE_KEY], feat_dict[ATTR_KEY][LABEL_KEY], feat_dict[ATTR_KEY][GROUP_KEY])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "iwg3sPmExciD"
},
"source": [
"Then, we build out the data functions we need in the rest of the colab."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "KbR64r0VVG5h"
},
"outputs": [],
"source": [
"# Train data returning either 2 or 3 elements (the third element being the group)\n",
"def celeb_a_train_data_wo_group(batch_size):\n",
" celeb_a_train_data = celeb_a_builder.as_dataset(split='train').shuffle(1024).repeat().batch(batch_size).map(preprocess_input_dict)\n",
" return celeb_a_train_data.map(get_image_and_label)\n",
"def celeb_a_train_data_w_group(batch_size):\n",
" celeb_a_train_data = celeb_a_builder.as_dataset(split='train').shuffle(1024).repeat().batch(batch_size).map(preprocess_input_dict)\n",
" return celeb_a_train_data.map(get_image_label_and_group)\n",
"\n",
"# Test data for the overall evaluation\n",
"celeb_a_test_data = celeb_a_builder.as_dataset(split='test').batch(1).map(preprocess_input_dict).map(get_image_label_and_group)\n",
"# Copy test data locally to be able to read it into tfma\n",
"copy_test_files_to_local()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "NXO3woTxiCk0"
},
"source": [
"# Build a simple DNN Model\n",
"Because this notebook focuses on TFCO, we will assemble a simple, unconstrained `tf.keras.Sequential` model.\n",
"\n",
"We may be able to greatly improve model performance by adding some complexity (e.g., more densely-connected layers, exploring different activation functions, increasing image size), but that may distract from the goal of demonstrating how easy it is to apply the TFCO library when working with Keras. For that reason, the model will be kept simple — but feel encouraged to explore this space."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "RNZhN_zU8DRD"
},
"outputs": [],
"source": [
"def create_model():\n",
" # For this notebook, accuracy will be used to evaluate performance.\n",
" METRICS = [\n",
" tf.keras.metrics.BinaryAccuracy(name='accuracy')\n",
" ]\n",
"\n",
" # The model consists of:\n",
" # 1. An input layer that represents the 28x28x3 image flatten.\n",
" # 2. A fully connected layer with 64 units activated by a ReLU function.\n",
" # 3. A single-unit readout layer to output real-scores instead of probabilities.\n",
" model = keras.Sequential([\n",
" keras.layers.Flatten(input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3), name='image'),\n",
" keras.layers.Dense(64, activation='relu'),\n",
" keras.layers.Dense(1, activation=None)\n",
" ])\n",
"\n",
" # TFCO by default uses hinge loss — and that will also be used in the model.\n",
" model.compile(\n",
" optimizer=tf.keras.optimizers.Adam(0.001),\n",
" loss='hinge',\n",
" metrics=METRICS)\n",
" return model"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7A4uKPNVzPVO"
},
"source": [
"We also define a function to set seeds to ensure reproducible results. Note that this colab is meant as an educational tool and does not have the stability of a finely tuned production pipeline. Running without setting a seed may lead to varied results. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "-IVw4EgKzqSF"
},
"outputs": [],
"source": [
"def set_seeds():\n",
" np.random.seed(121212)\n",
" tf.compat.v1.set_random_seed(212121)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Xrbjmmeom8pA"
},
"source": [
"# Fairness Indicators Helper Functions\n",
"Before training our model, we define a number of helper functions that will allow us to evaluate the model's performance via Fairness Indicators.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1EPF_k620CRN"
},
"source": [
"First, we create a helper function to save our model once we train it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ejHbhLW5epar"
},
"outputs": [],
"source": [
"def save_model(model, subdir):\n",
" base_dir = tempfile.mkdtemp(prefix='saved_models')\n",
" model_location = os.path.join(base_dir, subdir)\n",
" model.save(model_location, save_format='tf')\n",
" return model_location"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "erhKEvqByCNj"
},
"source": [
"Next, we define functions used to preprocess the data in order to correctly pass it through to TFMA."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "D2qa8Okwj_U3"
},
"outputs": [],
"source": [
"#@title Data Preprocessing functions for \n",
"def tfds_filepattern_for_split(dataset_name, split):\n",
" return f\"{local_test_file_full_prefix()}*\"\n",
"\n",
"class PreprocessCelebA(object):\n",
" \"\"\"Class that deserializes, decodes and applies additional preprocessing for CelebA input.\"\"\"\n",
" def __init__(self, dataset_name):\n",
" builder = tfds.builder(dataset_name)\n",
" self.features = builder.info.features\n",
" example_specs = self.features.get_serialized_info()\n",
" self.parser = tfds.core.example_parser.ExampleParser(example_specs)\n",
"\n",
" def __call__(self, serialized_example):\n",
" # Deserialize\n",
" deserialized_example = self.parser.parse_example(serialized_example)\n",
" # Decode\n",
" decoded_example = self.features.decode_example(deserialized_example)\n",
" # Additional preprocessing\n",
" image = decoded_example[IMAGE_KEY]\n",
" label = decoded_example[ATTR_KEY][LABEL_KEY]\n",
" # Resize and scale image.\n",
" image = tf.cast(image, tf.float32)\n",
" image = tf.image.resize(image, [IMAGE_SIZE, IMAGE_SIZE])\n",
" image /= 255.0\n",
" image = tf.reshape(image, [-1])\n",
" # Cast label and group to float32.\n",
" label = tf.cast(label, tf.float32)\n",
"\n",
" group = decoded_example[ATTR_KEY][GROUP_KEY]\n",
" \n",
" output = tf.train.Example()\n",
" output.features.feature[IMAGE_KEY].float_list.value.extend(image.numpy().tolist())\n",
" output.features.feature[LABEL_KEY].float_list.value.append(label.numpy())\n",
" output.features.feature[GROUP_KEY].bytes_list.value.append(b\"Young\" if group.numpy() else b'Not Young')\n",
" return output.SerializeToString()\n",
"\n",
"def tfds_as_pcollection(beam_pipeline, dataset_name, split):\n",
" return (\n",
" beam_pipeline\n",
" | 'Read records' >> beam.io.ReadFromTFRecord(tfds_filepattern_for_split(dataset_name, split))\n",
" | 'Preprocess' >> beam.Map(PreprocessCelebA(dataset_name))\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "fBKvxd2Tz3hK"
},
"source": [
"Finally, we define a function that evaluates the results in TFMA."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "30YduitftaNB"
},
"outputs": [],
"source": [
"def get_eval_results(model_location, eval_subdir):\n",
" base_dir = tempfile.mkdtemp(prefix='saved_eval_results')\n",
" tfma_eval_result_path = os.path.join(base_dir, eval_subdir)\n",
"\n",
" eval_config_pbtxt = \"\"\"\n",
" model_specs {\n",
" label_key: \"%s\"\n",
" }\n",
" metrics_specs {\n",
" metrics {\n",
" class_name: \"FairnessIndicators\"\n",
" config: '{ \"thresholds\": [0.22, 0.5, 0.75] }'\n",
" }\n",
" metrics {\n",
" class_name: \"ExampleCount\"\n",
" }\n",
" }\n",
" slicing_specs {}\n",
" slicing_specs { feature_keys: \"%s\" }\n",
" options {\n",
" compute_confidence_intervals { value: False }\n",
" disabled_outputs{values: \"analysis\"}\n",
" }\n",
" \"\"\" % (LABEL_KEY, GROUP_KEY)\n",
" \n",
" eval_config = text_format.Parse(eval_config_pbtxt, tfma.EvalConfig())\n",
"\n",
" eval_shared_model = tfma.default_eval_shared_model(\n",
" eval_saved_model_path=model_location, tags=[tf.saved_model.SERVING])\n",
"\n",
" schema_pbtxt = \"\"\"\n",
" tensor_representation_group {\n",
" key: \"\"\n",
" value {\n",
" tensor_representation {\n",
" key: \"%s\"\n",
" value {\n",
" dense_tensor {\n",
" column_name: \"%s\"\n",
" shape {\n",
" dim { size: 28 }\n",
" dim { size: 28 }\n",
" dim { size: 3 }\n",
" }\n",
" }\n",
" }\n",
" }\n",
" }\n",
" }\n",
" feature {\n",
" name: \"%s\"\n",
" type: FLOAT\n",
" }\n",
" feature {\n",
" name: \"%s\"\n",
" type: FLOAT\n",
" }\n",
" feature {\n",
" name: \"%s\"\n",
" type: BYTES\n",
" }\n",
" \"\"\" % (IMAGE_KEY, IMAGE_KEY, IMAGE_KEY, LABEL_KEY, GROUP_KEY)\n",
" schema = text_format.Parse(schema_pbtxt, schema_pb2.Schema())\n",
" coder = tf_example_record.TFExampleBeamRecord(\n",
" physical_format='inmem', schema=schema,\n",
" raw_record_column_name=tfma.ARROW_INPUT_COLUMN)\n",
" tensor_adapter_config = tensor_adapter.TensorAdapterConfig(\n",
" arrow_schema=coder.ArrowSchema(),\n",
" tensor_representations=coder.TensorRepresentations())\n",
" # Run the fairness evaluation.\n",
" with beam.Pipeline() as pipeline:\n",
" _ = (\n",
" tfds_as_pcollection(pipeline, 'celeb_a', 'test')\n",
" | 'ExamplesToRecordBatch' >> coder.BeamSource()\n",
" | 'ExtractEvaluateAndWriteResults' >>\n",
" tfma.ExtractEvaluateAndWriteResults(\n",
" eval_config=eval_config,\n",
" eval_shared_model=eval_shared_model,\n",
" output_path=tfma_eval_result_path,\n",
" tensor_adapter_config=tensor_adapter_config)\n",
" )\n",
" return tfma.load_eval_result(output_path=tfma_eval_result_path)\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "76tZ3vk-tyo9"
},
"source": [
"# Train & Evaluate Unconstrained Model\n",
"\n",
"With the model now defined and the input pipeline in place, we’re now ready to train our model. To cut back on the amount of execution time and memory, we will train the model by slicing the data into small batches with only a few repeated iterations.\n",
"\n",
"Note that running this notebook in TensorFlow < 2.0.0 may result in a deprecation warning for `np.where`. Safely ignore this warning as TensorFlow addresses this in 2.X by using `tf.where` in place of `np.where`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "3m9OOdU_8GWo"
},
"outputs": [],
"source": [
"BATCH_SIZE = 32\n",
"\n",
"# Set seeds to get reproducible results\n",
"set_seeds()\n",
"\n",
"model_unconstrained = create_model()\n",
"model_unconstrained.fit(celeb_a_train_data_wo_group(BATCH_SIZE), epochs=5, steps_per_epoch=1000)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "nCtBH9DkvtUy"
},
"source": [
"Evaluating the model on the test data should result in a final accuracy score of just over 85%. Not bad for a simple model with no fine tuning."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "mgsjbxpTIdZf"
},
"outputs": [],
"source": [
"print('Overall Results, Unconstrained')\n",
"celeb_a_test_data = celeb_a_builder.as_dataset(split='test').batch(1).map(preprocess_input_dict).map(get_image_label_and_group)\n",
"results = model_unconstrained.evaluate(celeb_a_test_data)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "L5jslIrzwIKo"
},
"source": [
"However, performance evaluated across age groups may reveal some shortcomings.\n",
"\n",
"To explore this further, we evaluate the model with Fairness Indicators (via TFMA). In particular, we are interested in seeing whether there is a significant gap in performance between \"Young\" and \"Not Young\" categories when evaluated on false positive rate.\n",
"\n",
"A false positive error occurs when the model incorrectly predicts the positive class. In this context, a false positive outcome occurs when the ground truth is an image of a celebrity 'Not Smiling' and the model predicts 'Smiling'. By extension, the false positive rate, which is used in the visualization above, is a measure of accuracy for a test. While this is a relatively mundane error to make in this context, false positive errors can sometimes cause more problematic behaviors. For instance, a false positive error in a spam classifier could cause a user to miss an important email."
]
},
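  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick, self-contained illustration of the definition (not part of the training or evaluation pipeline), the false positive rate can be computed from binary labels and thresholded predictions as FP / (FP + TN):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative only: false positive rate = FP / (FP + TN).\n",
    "labels_example = np.array([0, 0, 0, 1, 1])\n",
    "preds_example = np.array([1, 0, 0, 1, 0])  # already thresholded\n",
    "\n",
    "fp = np.sum((preds_example == 1) & (labels_example == 0))\n",
    "tn = np.sum((preds_example == 0) & (labels_example == 0))\n",
    "print('FPR = %.2f' % (fp / (fp + tn)))  # 1 / (1 + 2) = 0.33"
   ]
  },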
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "nFL91nZF1V8D"
},
"outputs": [],
"source": [
"model_location = save_model(model_unconstrained, 'model_export_unconstrained')\n",
"eval_results_unconstrained = get_eval_results(model_location, 'eval_results_unconstrained')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "34zHIMW0NHld"
},
"source": [
"As mentioned above, we are concentrating on the false positive rate. The current version of Fairness Indicators (0.1.2) selects false negative rate by default. After running the line below, deselect false_negative_rate and select false_positive_rate to look at the metric we are interested in."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "KXMVmUMi0ydk"
},
"outputs": [],
"source": [
"tfma.addons.fairness.view.widget_view.render_fairness_indicator(eval_results_unconstrained)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "zYVpZ-DpBsfD"
},
"source": [
"As the results show above, we do see a **disproportionate gap between \"Young\" and \"Not Young\" categories**.\n",
"\n",
"This is where TFCO can help by constraining the false positive rate to be within a more acceptable criterion.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ZNnI_Eu70gVp"
},
"source": [
"# Constrained Model Set Up\n",
"As documented in [TFCO's library](https://github.com/google-research/tensorflow_constrained_optimization/blob/master/README.md), there are several helpers that will make it easier to constrain the problem:\n",
"\n",
"1. `tfco.rate_context()` – This is what will be used in constructing a constraint for each age group category.\n",
"2. `tfco.RateMinimizationProblem()`– The rate expression to be minimized here will be the false positive rate subject to age group. In other words, performance now will be evaluated based on the difference between the false positive rates of the age group and that of the overall dataset. For this demonstration, a false positive rate of less than or equal to 5% will be set as the constraint.\n",
"3. `tfco.ProxyLagrangianOptimizerV2()` – This is the helper that will actually solve the rate constraint problem.\n",
"\n",
"The cell below will call on these helpers to set up model training with the fairness constraint.\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "BTukzvfD6iWr"
},
"outputs": [],
"source": [
"# The batch size is needed to create the input, labels and group tensors.\n",
"# These tensors are initialized with all 0's. They will eventually be assigned\n",
"# the batch content to them. A large batch size is chosen so that there are\n",
"# enough number of \"Young\" and \"Not Young\" examples in each batch.\n",
"set_seeds()\n",
"model_constrained = create_model()\n",
"BATCH_SIZE = 32\n",
"\n",
"# Create input tensor.\n",
"input_tensor = tf.Variable(\n",
" np.zeros((BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, 3), dtype=\"float32\"),\n",
" name=\"input\")\n",
"\n",
"# Create labels and group tensors (assuming both labels and groups are binary).\n",
"labels_tensor = tf.Variable(\n",
" np.zeros(BATCH_SIZE, dtype=\"float32\"), name=\"labels\")\n",
"groups_tensor = tf.Variable(\n",
" np.zeros(BATCH_SIZE, dtype=\"float32\"), name=\"groups\")\n",
"\n",
"# Create a function that returns the applied 'model' to the input tensor\n",
"# and generates constrained predictions.\n",
"def predictions():\n",
" return model_constrained(input_tensor)\n",
"\n",
"# Create overall context and subsetted context.\n",
"# The subsetted context contains subset of examples where group attribute < 1\n",
"# (i.e. the subset of \"Not Young\" celebrity images).\n",
"# \"groups_tensor < 1\" is used instead of \"groups_tensor == 0\" as the former\n",
"# would be a comparison on the tensor value, while the latter would be a\n",
"# comparison on the Tensor object.\n",
"context = tfco.rate_context(predictions, labels=lambda:labels_tensor)\n",
"context_subset = context.subset(lambda:groups_tensor < 1)\n",
"\n",
"# Setup list of constraints.\n",
"# In this notebook, the constraint will just be: FPR to less or equal to 5%.\n",
"constraints = [tfco.false_positive_rate(context_subset) <= 0.05]\n",
"\n",
"# Setup rate minimization problem: minimize overall error rate s.t. constraints.\n",
"problem = tfco.RateMinimizationProblem(tfco.error_rate(context), constraints)\n",
"\n",
"# Create constrained optimizer and obtain train_op.\n",
"# Separate optimizers are specified for the objective and constraints\n",
"optimizer = tfco.ProxyLagrangianOptimizerV2(\n",
" optimizer=tf.keras.optimizers.legacy.Adam(learning_rate=0.001),\n",
" constraint_optimizer=tf.keras.optimizers.legacy.Adam(learning_rate=0.001),\n",
" num_constraints=problem.num_constraints)\n",
"\n",
"# A list of all trainable variables is also needed to use TFCO.\n",
"var_list = (model_constrained.trainable_weights + list(problem.trainable_variables) +\n",
" optimizer.trainable_variables())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "thEe8A8UYbrO"
},
"source": [
"The model is now set up and ready to be trained with the false positive rate constraint across age group.\n",
"\n",
"Now, because the last iteration of the constrained model may not necessarily be the best performing model in terms of the defined constraint, the TFCO library comes equipped with `tfco.find_best_candidate_index()` that can help choose the best iterate out of the ones found after each epoch. Think of `tfco.find_best_candidate_index()` as an added heuristic that ranks each of the outcomes based on accuracy and fairness constraint (in this case, false positive rate across age group) separately with respect to the training data. That way, it can search for a better trade-off between overall accuracy and the fairness constraint.\n",
"\n",
"The following cells will start the training with constraints while also finding the best performing model per iteration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "73doG4HL6nPS"
},
"outputs": [],
"source": [
"# Obtain train set batches.\n",
"\n",
"NUM_ITERATIONS = 100 # Number of training iterations.\n",
"SKIP_ITERATIONS = 10 # Print training stats once in this many iterations.\n",
"\n",
"# Create temp directory for saving snapshots of models.\n",
"temp_directory = tempfile.mktemp()\n",
"os.mkdir(temp_directory)\n",
"\n",
"# List of objective and constraints across iterations.\n",
"objective_list = []\n",
"violations_list = []\n",
"\n",
"# Training iterations.\n",
"iteration_count = 0\n",
"for (image, label, group) in celeb_a_train_data_w_group(BATCH_SIZE):\n",
" # Assign current batch to input, labels and groups tensors.\n",
" input_tensor.assign(image)\n",
" labels_tensor.assign(label)\n",
" groups_tensor.assign(group)\n",
"\n",
" # Run gradient update.\n",
" optimizer.minimize(problem, var_list=var_list)\n",
"\n",
" # Record objective and violations.\n",
" objective = problem.objective()\n",
" violations = problem.constraints()\n",
"\n",
" sys.stdout.write(\n",
" \"\\r Iteration %d: Hinge Loss = %.3f, Max. Constraint Violation = %.3f\"\n",
" % (iteration_count + 1, objective, max(violations)))\n",
"\n",
" # Snapshot model once in SKIP_ITERATIONS iterations.\n",
" if iteration_count % SKIP_ITERATIONS == 0:\n",
" objective_list.append(objective)\n",
" violations_list.append(violations)\n",
"\n",
" # Save snapshot of model weights.\n",
" model_constrained.save_weights(\n",
" temp_directory + \"/celeb_a_constrained_\" +\n",
" str(iteration_count / SKIP_ITERATIONS) + \".h5\")\n",
"\n",
" iteration_count += 1\n",
" if iteration_count >= NUM_ITERATIONS:\n",
" break\n",
"\n",
"# Choose best model from recorded iterates and load that model.\n",
"best_index = tfco.find_best_candidate_index(\n",
" np.array(objective_list), np.array(violations_list))\n",
"\n",
"model_constrained.load_weights(\n",
" temp_directory + \"/celeb_a_constrained_\" + str(best_index) + \".0.h5\")\n",
"\n",
"# Remove temp directory.\n",
"os.system(\"rm -r \" + temp_directory)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6r-6_R_gSrsT"
},
"source": [
"After having applied the constraint, we evaluate the results once again using Fairness Indicators."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "5G6B3OR9CUmo"
},
"outputs": [],
"source": [
"model_location = save_model(model_constrained, 'model_export_constrained')\n",
"eval_result_constrained = get_eval_results(model_location, 'eval_results_constrained')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "sVteOnE80ATS"
},
"source": [
"As with the previous time we used Fairness Indicators, deselect false_negative_rate and select false_positive_rate to look at the metric we are interested in.\n",
"\n",
"Note that to fairly compare the two versions of our model, it is important to use thresholds that set the overall false positive rate to be roughly equal. This ensures that we are looking at actual change as opposed to just a shift in the model equivalent to simply moving the threshold boundary. In our case, comparing the unconstrained model at 0.5 and the constrained model at 0.22 provides a fair comparison for the models."
]
},
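  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you would like to check this numerically as well as visually, the sketch below reads the overall false positive rate at each threshold from the two evaluation results. The metric keys are assumptions based on the `[0.22, 0.5, 0.75]` thresholds configured in `get_eval_results`; adjust them if your configuration differs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: compare overall FPR of the two models at roughly matched thresholds.\n",
    "# The metric keys below are assumed from the thresholds configured above.\n",
    "overall_slice = ()\n",
    "unconstrained_metrics = eval_results_unconstrained.get_metrics_for_slice(overall_slice)\n",
    "constrained_metrics = eval_result_constrained.get_metrics_for_slice(overall_slice)\n",
    "\n",
    "print('Unconstrained FPR @ 0.50:',\n",
    "      unconstrained_metrics.get('fairness_indicators_metrics/false_positive_rate@0.5'))\n",
    "print('Constrained FPR @ 0.22:',\n",
    "      constrained_metrics.get('fairness_indicators_metrics/false_positive_rate@0.22'))"
   ]
  },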
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "GRIjYftvuc7b"
},
"outputs": [],
"source": [
"eval_results_dict = {\n",
" 'constrained': eval_result_constrained,\n",
" 'unconstrained': eval_results_unconstrained,\n",
"}\n",
"tfma.addons.fairness.view.widget_view.render_fairness_indicator(multi_eval_results=eval_results_dict)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "lrT-7EBrcBvV"
},
"source": [
"With TFCO's ability to express a more complex requirement as a rate constraint, we helped this model achieve a more desirable outcome with little impact to the overall performance. There is, of course, still room for improvement, but at least TFCO was able to find a model that gets close to satisfying the constraint and reduces the disparity between the groups as much as possible."
]
}
],
"metadata": {
"colab": {
"collapsed_sections": [],
"name": "Fairness Indicators TFCO CelebA Case Study.ipynb",
"private_outputs": true,
"provenance": [],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.22"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
================================================
FILE: docs/tutorials/Fairness_Indicators_TFCO_Wiki_Case_Study.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "jMqk3Z8EciF8"
},
"source": [
"##### Copyright 2020 The TensorFlow Authors."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "XbpNOB-vJVKu"
},
"outputs": [],
"source": [
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "bqdaOVRxWs8v"
},
"source": [
"# Wiki Talk Comments Toxicity Prediction"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "EG_KEDkodWsT"
},
"source": [
"<div class=\"buttons-wrapper\">\n",
" <a class=\"md-button\" target=\"_blank\" href=\n",
" \"tutorials/Fairness_Indicators_TFCO_Wiki_Case_Study\">\n",
" <div class=\"buttons-content\">\n",
" <img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">\n",
" View on TensorFlow.org\n",
" </div>\n",
" </a>\n",
" <a class=\"md-button\" target=\"_blank\" href=\n",
" \"https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_TFCO_Wiki_Case_Study.ipynb\">\n",
" <div class=\"buttons-content\">\n",
" <img src=\n",
"\t \"https://www.tensorflow.org/images/colab_logo_32px.png\">\n",
" Run in Google Colab\n",
" </div>\n",
" </a>\n",
" <a class=\"md-button\" target=\"_blank\" href=\n",
" \"https://github.com/tensorflow/fairness-indicators/tree/master/docs/tutorials/Fairness_Indicators_TFCO_Wiki_Case_Study.ipynb\">\n",
" <div class=\"buttons-content\">\n",
" <img width=\"32px\" src=\n",
"\t \"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">\n",
" View source on GitHub\n",
" </div>\n",
" </a>\n",
" <a class=\"md-button\" href=\n",
" \"https://storage.googleapis.com/tensorflow_docs/fairness-indicators/g3doc/tutorials/Fairness_Indicators_TFCO_Wiki_Case_Study.ipynb\">\n",
" <div class=\"buttons-content\">\n",
" <img src=\n",
"\t \"https://www.tensorflow.org/images/download_logo_32px.png\">\n",
" Download notebook\n",
" </div>\n",
" </a>\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "y6T5tlXcdW7J"
},
"source": [
"In this example, we consider the task of predicting whether a discussion comment posted on a Wiki talk page contains toxic content (i.e. contains content that is “rude, disrespectful or unreasonable”). We use a public <a href=\"https://figshare.com/articles/Wikipedia_Talk_Labels_Toxicity/4563973\">dataset</a> released by the <a href=\"https://conversationai.github.io/\">Conversation AI</a> project, which contains over 100k comments from the English Wikipedia that are annotated by crowd workers (see [paper](https://arxiv.org/pdf/1610.08914.pdf) for labeling methodology).\n",
"\n",
"One of the challenges with this dataset is that a very small proportion of the comments cover sensitive topics such as sexuality or religion. As such, training a neural network model on this dataset leads to disparate performance on the smaller sensitive topics. This can mean that innocuous statements about those topics might get incorrectly flagged as ‘toxic’ at higher rates, causing speech to be unfairly censored\n",
"\n",
"By imposing constraints during training, we can train a *fairer* model that performs more equitably across the different topic groups. \n",
"\n",
"We will use the TFCO library to optimize for our fairness goal during training."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "DG_C2gsAKV7x"
},
"source": [
"## Installation\n",
"\n",
"Let's first install and import the relevant libraries. Note that you may have to restart your colab once after running the first cell because of outdated packages in the runtime. After doing so, there should be no further issues with imports."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "0XOLn8Pyrc_s"
},
"outputs": [],
"source": [
"#@title pip installs\n",
"!pip install git+https://github.com/google-research/tensorflow_constrained_optimization\n",
"!pip install git+https://github.com/tensorflow/fairness-indicators"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2ZkQDo2xcDXU"
},
"source": [
"Note that depending on when you run the cell below, you may receive a warning about the default version of TensorFlow in Colab switching to TensorFlow 2.X soon. You can safely ignore that warning as this notebook was designed to be compatible with TensorFlow 1.X and 2.X."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "nd_Y6CTnWs8w"
},
"outputs": [],
"source": [
"#@title Import Modules\n",
"import io\n",
"import os\n",
"import shutil\n",
"import sys\n",
"import tempfile\n",
"import time\n",
"import urllib\n",
"import zipfile\n",
"\n",
"import apache_beam as beam\n",
"from IPython.display import display\n",
"from IPython.display import HTML\n",
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"import tensorflow as tf\n",
"import tensorflow.keras as keras\n",
"from tensorflow.keras import layers\n",
"from tensorflow.keras.preprocessing import sequence\n",
"from tensorflow.keras.preprocessing import text\n",
"import tensorflow_constrained_optimization as tfco\n",
"import tensorflow_model_analysis as tfma\n",
"import fairness_indicators as fi\n",
"from tensorflow_model_analysis.addons.fairness.view import widget_view\n",
"from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_evaluate_graph\n",
"from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_extractor\n",
"from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_predict as agnostic_predict"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "GvqR564dLEVa"
},
"source": [
"Though TFCO is compatible with eager and graph execution, this notebook assumes that eager execution is enabled by default. To ensure that nothing breaks, eager execution will be enabled in the cell below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "avMBqzjWct4Z"
},
"outputs": [],
"source": [
"#@title Enable Eager Execution and Print Versions\n",
"if tf.__version__ < \"2.0.0\":\n",
" tf.enable_eager_execution()\n",
" print(\"Eager execution enabled.\")\n",
"else:\n",
" print(\"Eager execution enabled by default.\")\n",
"\n",
"print(\"TensorFlow \" + tf.__version__)\n",
"print(\"TFMA \" + tfma.__version__)\n",
"print(\"FI \" + fi.version.__version__)"
]
},
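  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With the libraries loaded, the cell below gives a minimal sketch of the TFCO training pattern this case study builds towards: minimize the overall error rate subject to a rate constraint on a sensitive group. The names `model`, `x`, `y` and `group_mask` are placeholders rather than this notebook's actual variables, and the 0.05 slack and Adagrad learning rates are illustrative values only."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal TFCO sketch with placeholder names -- not this notebook's exact code.\n",
    "# Goal: minimize the overall error rate while keeping the false positive rate\n",
    "# on a sensitive group within 0.05 (an illustrative slack) of the overall one.\n",
    "def preview_tfco_training(model, x, y, group_mask, num_steps=100):\n",
    "  # In eager mode, rate contexts take nullary callables.\n",
    "  ctx = tfco.rate_context(lambda: model(x), lambda: y)\n",
    "  group_ctx = ctx.subset(lambda: group_mask > 0)\n",
    "  problem = tfco.RateMinimizationProblem(\n",
    "      tfco.error_rate(ctx),\n",
    "      [tfco.false_positive_rate(group_ctx) <=\n",
    "       tfco.false_positive_rate(ctx) + 0.05])\n",
    "  optimizer = tfco.ProximalLagrangianOptimizerV2(\n",
    "      optimizer=tf.keras.optimizers.Adagrad(0.005),\n",
    "      constraint_optimizer=tf.keras.optimizers.Adagrad(0.01),\n",
    "      num_constraints=problem.num_constraints)\n",
    "  # The problem and optimizer contribute extra variables (e.g. Lagrange\n",
    "  # multipliers) that must be trained alongside the model weights.\n",
    "  var_list = (model.trainable_weights + list(problem.trainable_variables) +\n",
    "              optimizer.trainable_variables())\n",
    "  for _ in range(num_steps):\n",
    "    optimizer.minimize(problem, var_list=var_list)"
   ]
  },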
{
"cell_type": "markdown",
"metadata": {
"id": "YUJyWaAwWs83"
},
"source": [
"## Hyper-parameters\n",
"\n",
"First, we set some hyper-parameters needed for the data preprocessing and model training."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1aXlwlqTWs84"
},
"outputs": [],
"source": [
"hparams = {\n",
" \"batch_size\": 128,\n",
" \"cnn_filter_sizes\": [128, 128, 128],\n",
" \"cnn_kernel_sizes\": [5, 5, 5],\n",
" \"cnn_pooling_sizes\": [5, 5, 40],\n",
" \"constraint_learning_rate\": 0.01,\n",
" \"embedding_dim\": 100,\n",
" \"embedding_trainable\": False,\n",
" \"learning_rate\": 0.005,\n",
" \"max_num_words\": 10000,\n",
" \"max_sequence_length\": 250\n",
"}"
]
},
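  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `cnn_*` hyper-parameters above describe a 1-D convolutional text classifier. The cell below is a minimal Keras sketch consistent with them -- an illustration only, not necessarily the exact architecture built in this notebook (the frozen embeddings, for instance, suggest that pre-trained vectors are loaded in practice)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch of a model matching the hyper-parameters above.\n",
    "def sketch_model(hparams, vocab_size):\n",
    "  model = keras.Sequential()\n",
    "  model.add(layers.Embedding(vocab_size,\n",
    "                             hparams[\"embedding_dim\"],\n",
    "                             input_length=hparams[\"max_sequence_length\"],\n",
    "                             trainable=hparams[\"embedding_trainable\"]))\n",
    "  for filters, kernel, pool in zip(hparams[\"cnn_filter_sizes\"],\n",
    "                                   hparams[\"cnn_kernel_sizes\"],\n",
    "                                   hparams[\"cnn_pooling_sizes\"]):\n",
    "    model.add(layers.Conv1D(filters, kernel, activation=\"relu\", padding=\"same\"))\n",
    "    model.add(layers.MaxPooling1D(pool, padding=\"same\"))\n",
    "  model.add(layers.Flatten())\n",
    "  model.add(layers.Dense(1))  # a single logit; > 0 predicts \"toxic\"\n",
    "  return model\n",
    "\n",
    "sketch_model(hparams, hparams[\"max_num_words\"]).summary()"
   ]
  },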
{
"cell_type": "markdown",
"metadata": {
"id": "0PMs8Iwxq98C"
},
"source": [
"## Load and pre-process dataset"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "DIe2JRDeWs87"
},
"source": [
"Next, we download the dataset and preprocess it. The train, test and validation sets are provided as separate CSV files."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "rcd2CV7pWs88"
},
"outputs": [],
"source": [
"toxicity_data_url = (\"https://github.com/conversationai/unintended-ml-bias-analysis/\"\n",
" \"raw/e02b9f12b63a39235e57ba6d3d62d8139ca5572c/data/\")\n",
"\n",
"data_train = pd.read_csv(toxicity_data_url + \"wiki_train.csv\")\n",
"data_test = pd.read_csv(toxicity_data_url + \"wiki_test.csv\")\n",
"data_vali = pd.read_csv(toxicity_data_url + \"wiki_dev.csv\")\n",
"\n",
"data_train.head()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Ojo617RIWs8_"
},
"source": [
"The `comment` column contains the discussion comments and `is_toxic` column indicates whether or not a comment is annotated as toxic. \n",
"\n",
"In the following, we:\n",
"1. Separate out the labels\n",
"2. Tokenize the text comments\n",
"3. Identify comments that contain sensitive topic terms \n",
"\n",
"First, we separate the labels from the train, test and validation sets. The labels are all binary (0 or 1)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "mxo7ny90Ws9A"
},
"outputs": [],
"source": [
"labels_train = data_train[\"is_toxic\"].values.reshape(-1, 1) * 1.0\n",
"labels_test = data_test[\"is_toxic\"].values.reshape(-1, 1) * 1.0\n",
"labels_vali = data_vali[\"is_toxic\"].values.reshape(-1, 1) * 1.0"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "alrWi6jUWs9C"
},
"source": [
"Next, we tokenize the textual comments using the `Tokenizer` provided by `Keras`. We use the training set comments alone to build a vocabulary of tokens, and use them to convert all the comments into a (padded) sequence of tokens of the same length."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "yvOTBsrHWs9D"
},
"outputs": [],
"source": [
"tokenizer = text.Tokenizer(num_words=hparams[\"max_num_words\"])\n",
"tokenizer.fit_on_texts(data_train[\"comment\"])\n",
"\n",
"def prep_text(texts, tokenizer, max_sequence_length):\n",
" # Turns text into into padded sequences.\n",
" text_sequences = tokenizer.texts_to_sequences(texts)\n",
" return sequence.pad_sequences(text_sequences, maxlen=max_sequence_length)\n",
"\n",
"text_train = prep_text(data_train[\"comment\"], tokenizer, hparams[\"max_sequence_length\"])\n",
"text_test = prep_text(data_test[\"comment\"], tokenizer, hparams[\"max_sequence_length\"])\n",
"text_vali = prep_text(data_vali[\"comment\"], tokenizer, hparams[\"max_sequence_length\"])"
]
},
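  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For instance (a made-up string, not a dataset row), `prep_text` maps raw text to a fixed-length sequence of token ids, left-padded with zeros."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical example of the tokenization step.\n",
    "demo_sequence = prep_text([\"this comment is not toxic\"], tokenizer,\n",
    "                          hparams[\"max_sequence_length\"])\n",
    "print(demo_sequence.shape)  # (1, 250): one row of left-padded token ids"
   ]
  },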
{
"cell_type": "markdown",
"metadata": {
"id": "Cn5zbgp-Ws9F"
},
"source": [
"Finally, we identify comments related to certain sensitive topic groups. We consider a subset of the <a href=\"https://github.com/conversationai/unintended-ml-bias-analysis/blob/master/unintended_ml_bias/bias_madlibs_data/adjectives_people.txt\">identity terms</a> provided with the dataset and group them into\n",
"four broad topic groups: *sexuality*, *gender identity*, *religion*, and *race*."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "EnFfV2gEWs9G"
},
"outputs": [],
"source": [
"terms = {\n",
" 'sexuality': ['gay', 'lesbian', 'bisexual', 'homosexual', 'straight', 'heterosexual'], \n",
" 'gender identity': ['trans', 'transgender', 'cis', 'nonbinary'],\n",
" 'religion': ['christian', 'muslim', 'jewish', 'buddhist', 'catholic', 'protestant', 'sikh', 'taoist'],\n",
" 'race': ['african', 'african american', 'black', 'white', 'european', 'hispanic', 'latino', 'latina', \n",
" 'latinx', 'mexican', 'canadian', 'american', 'asian', 'indian', 'middle eastern', 'chinese', \n",
" 'japanese']}\n",
"\n",
"group_names = list(terms.keys())\n",
"num_groups = len(group_names)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ooI3F5M4Ws9I"
},
"source": [
"We then create separate group membership matrices for the train, test and validation sets, where the rows correspond to comments, the columns correspond to the four sensitive groups, and each entry is a boolean indicating whether the comment contains a term from the topic group."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "zO7PyNckWs9J"
},
"outputs": [],
"source": [
"def get_groups(text):\n",
" # Returns a boolean NumPy array of shape (n, k), where n is the number of comments, \n",
" # and k is the number of groups. Each entry (i, j) indicates if the i-th comment \n",
" # contains a term from the j-th group.\n",
" groups = np.zeros((text.shape[0], num_groups))\n",
" for ii in range(num_groups):\n",
" groups[:, ii] = text.str.contains('|'.join(terms[group_names[ii]]), case=False)\n",
" return groups\n",
"\n",
"groups_train = get_groups(data_train[\"comment\"])\n",
"groups_test = get_groups(data_test[\"comment\"])\n",
"groups_vali = get_groups(data_vali[\"comment\"])"
]
},
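  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (made-up strings, not dataset rows), note that the matching is a case-insensitive substring search, so a comment mentioning \"Catholic\" is flagged under *religion* only."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical sanity check. Because this is substring matching, terms can\n",
    "# also match inside longer words (e.g. 'white' matches 'whitewash').\n",
    "demo_comments = pd.Series([\"I am Catholic\", \"hello world\"])\n",
    "pd.DataFrame(get_groups(demo_comments), columns=group_names)"
   ]
  },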
{
"cell_type": "markdown",
"metadata": {
"id": "GFAI6AB9Ws9L"
},
"source": [
"As shown below, all four topic groups constitute only a small fraction of the overall dataset, and have varying proportions of toxic comments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8Ug4u_P9Ws9M"
},
"outputs": [],
"source": [
"print(\"Overall label proportion = %.1f%%\" % (labels_train.mean() * 100))\n",
"\n",
"group_stats = []\n",
"for ii in range(num_groups):\n",
" group_proportion = groups_train[:, ii].mean()\n",
" group_pos_proportion = labels_train[groups_train[:, ii] == 1].mean()\n",
" group_stats.append([group_names[ii],\n",
" \"%.2f%%\" % (group_proportion * 100), \n",
" \"%.1f%%\" % (group_pos_proportion * 100)])\n",
"group_stats = pd.DataFrame(group_stats, \n",
" columns=[\"Topic group\", \"Group proportion\", \"Label proportion\"])\n",
"group_stats"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "aG5ZKKrVWs9O"
},
"source": [
"We see that only 1.3% of the dataset contains comments related to sexuality. Among them, 37% of the comments have been annotated as being toxic. Note that this is significantly larger than the overall proportion of comments annotated as toxic. This could be because the few comments that used those ident