Full code of understand-ai/anonymizer

Repository: understand-ai/anonymizer
Branch: master
Commit: 0c9686a313aa
Files: 30
Total size: 47.5 KB

Directory structure:
gitextract_8otnw0t9/

├── .flake8
├── .gitignore
├── LICENSE
├── README.md
├── anonymizer/
│   ├── __init__.py
│   ├── anonymization/
│   │   ├── __init__.py
│   │   └── anonymizer.py
│   ├── bin/
│   │   ├── __init__.py
│   │   └── anonymize.py
│   ├── detection/
│   │   ├── __init__.py
│   │   ├── detector.py
│   │   └── weights.py
│   ├── obfuscation/
│   │   ├── __init__.py
│   │   ├── helpers.py
│   │   └── obfuscator.py
│   └── utils/
│       ├── __init__.py
│       └── box.py
├── pytest.ini
├── requirements.txt
├── setup.py
└── test/
    ├── __init__.py
    ├── anonymization/
    │   ├── __init__.py
    │   └── anonymizer_test.py
    ├── detection/
    │   ├── __init__.py
    │   ├── detector_test.py
    │   └── weights_test.py
    ├── obfuscation/
    │   ├── __init__.py
    │   └── obfuscator_test.py
    └── utils/
        ├── __init__.py
        └── box_test.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .flake8
================================================
[flake8]
max-line-length = 119


================================================
FILE: .gitignore
================================================
__pycache__
/weights

================================================
FILE: LICENSE
================================================
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS


================================================
FILE: README.md
================================================
___

⚠️  **ARCHIVED REPOSITORY** ⚠️   

**We decided to archive this repository to make it read-only and indicate that it's no longer actively maintained.**
___

# understand.ai Anonymizer [ARCHIVED]

To improve privacy and make it easier for companies to comply with GDPR, we at [understand.ai](https://understand.ai/) decided to open-source our anonymization software and weights for a model trained on our in-house datasets for faces and license plates.
The model is trained with the [Tensorflow Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection) to make it easy for everyone to use these weights in their projects.

Our anonymizer is used for projects with some of Germany's largest car manufacturers and suppliers,
but we are sure there are many more applications.  

## Disclaimer

Please note that the version here is not identical to the anonymizer we use in customer projects. This model is an early version in terms of quality and speed. The code is written for ease of use instead of speed.  
For this reason, no multiprocessing code or batched detection and blurring are used in this repository.

This version of our anonymizer is trained to detect faces and license plates in images recorded with sensors 
typically used in autonomous vehicles. It will not work on low-quality or grayscale images and will also not work on 
fish-eye or other extreme camera configurations.


## Examples

![License Plate Example Raw](images/coco02.jpg?raw=true "Title")
![License Plate Anonymized](images/coco02_anonymized.jpg?raw=true "Title")

![Face Example Raw](images/coco01.jpg?raw=true "Title")
![Face Example Anonymized](images/coco01_anonymized.jpg?raw=true "Title")


## Installation

To install the anonymizer, clone this repository, create a new Python 3.6 environment, and install the dependencies.  
The sequence of commands to do all this is

```bash
python -m venv ~/.virtualenvs/anonymizer
source ~/.virtualenvs/anonymizer/bin/activate

git clone https://github.com/understand-ai/anonymizer
cd anonymizer

pip install --upgrade pip
pip install -r requirements.txt
```

To make sure everything is working as intended, run the test suite with the following command

```bash
pytest
```

Running the test cases can take several minutes and depends on your GPU (or CPU) and internet speed.  
Some test cases download model weights and some perform inference to make sure everything works as intended.

## Weights
[weights_face_v1.0.0.pb](https://drive.google.com/file/d/1CwChAYxJo3mON6rcvXsl82FMSKj82vxF)

[weights_plate_v1.0.0.pb](https://drive.google.com/file/d/1Fls9FYlQdRlLAtw-GVS_ie1oQUYmci9g)
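If you place these files manually instead of letting them download automatically, they must keep the names the code expects. A minimal sketch of the naming scheme, mirroring `get_weights_path` in `anonymizer/detection/weights.py` (`expected_weights_path` is a hypothetical helper for illustration):

```python
from pathlib import Path


def expected_weights_path(base_path, kind, version='1.0.0'):
    # Mirrors get_weights_path in anonymizer/detection/weights.py:
    # weights files are named weights_<kind>_v<version>.pb.
    return str(Path(base_path) / f'weights_{kind}_v{version}.pb')


face_path = expected_weights_path('weights', kind='face')
plate_path = expected_weights_path('weights', kind='plate')
```

The repository's own `get_weights_path` additionally validates that `kind` and `version` are known before building the path.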


## Usage

In case you want to run the model on CPU, make sure that you install `tensorflow` instead of the `tensorflow-gpu`
listed in `requirements.txt`.

Since the weights will be downloaded automatically, all that is needed to anonymize images is to run

```bash
PYTHONPATH=$PYTHONPATH:. python anonymizer/bin/anonymize.py --input /path/to/input_folder --image-output /path/to/output_folder --weights /path/to/store/weights
```

from the top folder of this repository. This will save both anonymized images and detection results as json-files to
the output folder.
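
Each JSON file contains one entry per detection. A sketch of the format, mirroring `save_detections` in `anonymizer/anonymization/anonymizer.py` (the coordinate and score values below are made up):

```python
import json

# One dict per detected box. Coordinates are absolute pixel values,
# 'kind' is either 'face' or 'plate', and 'score' is the detector confidence.
detections = [
    {'y_min': 10.0, 'x_min': 32.0, 'y_max': 50.0, 'x_max': 110.0,
     'score': 0.91, 'kind': 'plate'},
]
json_text = json.dumps(detections, indent=2)
```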

### Advanced Usage

In case you do not want to save the detections to JSON, add the flag `--no-write-detections`.
Example:

```bash
PYTHONPATH=$PYTHONPATH:. python anonymizer/bin/anonymize.py --input /path/to/input_folder --image-output /path/to/output_folder --weights /path/to/store/weights --no-write-detections
```

The detection thresholds for faces and license plates can be passed as additional parameters.
Both are floats in [0.001, 1.0]. Example:

```bash
PYTHONPATH=$PYTHONPATH:. python anonymizer/bin/anonymize.py --input /path/to/input_folder --image-output /path/to/output_folder --weights /path/to/store/weights --face-threshold=0.1 --plate-threshold=0.9
```

By default, only `*.jpg` and `*.png` files are anonymized. To anonymize only jpgs and tiffs, for instance, 
use the parameter `--image-extensions`. Example:

```bash
PYTHONPATH=$PYTHONPATH:. python anonymizer/bin/anonymize.py --input /path/to/input_folder --image-output /path/to/output_folder --weights /path/to/store/weights --image-extensions=jpg,tiff
```

The parameters for the blurring can be changed as well with the parameter `--obfuscation-kernel`.
It consists of three values: the size of the Gaussian kernel used for blurring, its standard deviation, and the size
of another kernel that is used to make the transition between blurred and non-blurred regions smoother.
Example usage:

```bash
PYTHONPATH=$PYTHONPATH:. python anonymizer/bin/anonymize.py --input /path/to/input_folder --image-output /path/to/output_folder --weights /path/to/store/weights --obfuscation-kernel="65,3,19"
```
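
The three comma-separated values are split and converted the same way `main()` in `anonymizer/bin/anonymize.py` does before constructing the `Obfuscator`. A sketch (the helper name is made up for illustration):

```python
def parse_obfuscation_kernel(parameters):
    """Split 'kernel_size,sigma,box_kernel_size' into typed values.

    Hypothetical helper; mirrors the parsing in main() of
    anonymizer/bin/anonymize.py.
    """
    kernel_size, sigma, box_kernel_size = parameters.split(',')
    return int(kernel_size), float(sigma), int(box_kernel_size)


kernel_size, sigma, box_kernel_size = parse_obfuscation_kernel('65,3,19')
```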

## Attributions

An image for one of the test cases was taken from the COCO dataset.  
The pictures in this README are under an [Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) license.
You can find the pictures [here](http://farm4.staticflickr.com/3081/2289618559_2daf30a365_z.jpg) and [here](http://farm8.staticflickr.com/7062/6802736606_ed325d0452_z.jpg).


================================================
FILE: anonymizer/__init__.py
================================================


================================================
FILE: anonymizer/anonymization/__init__.py
================================================
from anonymizer.anonymization.anonymizer import Anonymizer

__all__ = ['Anonymizer']


================================================
FILE: anonymizer/anonymization/anonymizer.py
================================================
import json
from pathlib import Path

import numpy as np
from PIL import Image
from tqdm import tqdm


def load_np_image(image_path):
    image = Image.open(image_path).convert('RGB')
    np_image = np.array(image)
    return np_image


def save_np_image(image, image_path):
    pil_image = Image.fromarray(image.astype(np.uint8), mode='RGB')
    pil_image.save(image_path)


def save_detections(detections, detections_path):
    json_output = []
    for box in detections:
        json_output.append({
            'y_min': box.y_min,
            'x_min': box.x_min,
            'y_max': box.y_max,
            'x_max': box.x_max,
            'score': box.score,
            'kind': box.kind
        })
    with open(detections_path, 'w') as output_file:
        json.dump(json_output, output_file, indent=2)


class Anonymizer:
    def __init__(self, detectors, obfuscator):
        self.detectors = detectors
        self.obfuscator = obfuscator

    def anonymize_image(self, image, detection_thresholds):
        assert set(self.detectors.keys()) == set(detection_thresholds.keys()),\
            'Detector names must match detection threshold names'
        detected_boxes = []
        for kind, detector in self.detectors.items():
            new_boxes = detector.detect(image, detection_threshold=detection_thresholds[kind])
            detected_boxes.extend(new_boxes)
        return self.obfuscator.obfuscate(image, detected_boxes), detected_boxes

    def anonymize_images(self, input_path, output_path, detection_thresholds, file_types, write_json):
        print(f'Anonymizing images in {input_path} and saving the anonymized images to {output_path}...')

        Path(output_path).mkdir(exist_ok=True)
        assert Path(output_path).is_dir(), 'Output path must be a directory'

        files = []
        for file_type in file_types:
            files.extend(list(Path(input_path).glob(f'**/*.{file_type}')))

        for input_image_path in tqdm(files):
            # Create output directory
            relative_path = input_image_path.relative_to(input_path)
            (Path(output_path) / relative_path.parent).mkdir(exist_ok=True, parents=True)
            output_image_path = Path(output_path) / relative_path
            output_detections_path = (Path(output_path) / relative_path).with_suffix('.json')

            # Anonymize image
            image = load_np_image(str(input_image_path))
            anonymized_image, detections = self.anonymize_image(image=image, detection_thresholds=detection_thresholds)
            save_np_image(image=anonymized_image, image_path=str(output_image_path))
            if write_json:
                save_detections(detections=detections, detections_path=str(output_detections_path))


================================================
FILE: anonymizer/bin/__init__.py
================================================


================================================
FILE: anonymizer/bin/anonymize.py
================================================
"""
Copyright 2018 understand.ai

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""


import argparse

from anonymizer.anonymization import Anonymizer
from anonymizer.detection import Detector, download_weights, get_weights_path
from anonymizer.obfuscation import Obfuscator


def parse_args():
    parser = argparse.ArgumentParser(
        description='Anonymize faces and license plates in a series of images.')
    parser.add_argument('--input', required=True,
                        metavar='/path/to/input_folder',
                        help='Path to a folder that contains the images that should be anonymized. '
                             'Images can be arbitrarily nested in subfolders and will still be found.')
    parser.add_argument('--image-output', required=True,
                        metavar='/path/to/output_folder',
                        help='Path to the folder the anonymized images should be written to. '
                             'Will mirror the folder structure of the input folder.')
    parser.add_argument('--weights', required=True,
                        metavar='/path/to/weights_folder',
                        help='Path to the folder where the weights are stored. If no weights with the '
                             'appropriate names are found they will be downloaded automatically.')
    parser.add_argument('--image-extensions', required=False, default='jpg,png',
                        metavar='"jpg,png"',
                        help='Comma-separated list of file types that will be anonymized')
    parser.add_argument('--face-threshold', type=float, required=False, default=0.3,
                        metavar='0.3',
                        help='Detection confidence needed to anonymize a detected face. '
                             'Must be in [0.001, 1.0]')
    parser.add_argument('--plate-threshold', type=float, required=False, default=0.3,
                        metavar='0.3',
                        help='Detection confidence needed to anonymize a detected license plate. '
                             'Must be in [0.001, 1.0]')
    parser.add_argument('--write-detections', dest='write_detections', action='store_true')
    parser.add_argument('--no-write-detections', dest='write_detections', action='store_false')
    parser.set_defaults(write_detections=True)
    parser.add_argument('--obfuscation-kernel', required=False, default='21,2,9',
                        metavar='kernel_size,sigma,box_kernel_size',
                        help='This parameter changes the way the blurring is done. '
                             'For blurring, a Gaussian kernel is used. The default size of the kernel is 21 pixels '
                             'and the default standard deviation of the distribution is 2. '
                             'Larger values of the first parameter lead to smoother transitions while blurring, and '
                             'larger values of the second parameter lead to stronger blurring. '
                             'To make the transition from blurred areas to the non-blurred image smoother, another '
                             'kernel is used, which has a default size of 9. Larger values lead to a smoother '
                             'transition. Both kernel sizes must be odd numbers.')
    args = parser.parse_args()

    print(f'input: {args.input}')
    print(f'image-output: {args.image_output}')
    print(f'weights: {args.weights}')
    print(f'image-extensions: {args.image_extensions}')
    print(f'face-threshold: {args.face_threshold}')
    print(f'plate-threshold: {args.plate_threshold}')
    print(f'write-detections: {args.write_detections}')
    print(f'obfuscation-kernel: {args.obfuscation_kernel}')
    print()

    return args


def main(input_path, image_output_path, weights_path, image_extensions, face_threshold, plate_threshold,
         write_json, obfuscation_parameters):
    download_weights(download_directory=weights_path)

    kernel_size, sigma, box_kernel_size = obfuscation_parameters.split(',')
    obfuscator = Obfuscator(kernel_size=int(kernel_size), sigma=float(sigma), box_kernel_size=int(box_kernel_size))
    detectors = {
        'face': Detector(kind='face', weights_path=get_weights_path(weights_path, kind='face')),
        'plate': Detector(kind='plate', weights_path=get_weights_path(weights_path, kind='plate'))
    }
    detection_thresholds = {
        'face': face_threshold,
        'plate': plate_threshold
    }
    anonymizer = Anonymizer(obfuscator=obfuscator, detectors=detectors)
    anonymizer.anonymize_images(input_path=input_path, output_path=image_output_path,
                                detection_thresholds=detection_thresholds, file_types=image_extensions.split(','),
                                write_json=write_json)


if __name__ == '__main__':
    args = parse_args()
    main(input_path=args.input, image_output_path=args.image_output, weights_path=args.weights,
         image_extensions=args.image_extensions,
         face_threshold=args.face_threshold, plate_threshold=args.plate_threshold,
         write_json=args.write_detections, obfuscation_parameters=args.obfuscation_kernel)


================================================
FILE: anonymizer/detection/__init__.py
================================================
from anonymizer.detection.detector import Detector
from anonymizer.detection.weights import download_weights, get_weights_path

__all__ = ['Detector', 'download_weights', 'get_weights_path']


================================================
FILE: anonymizer/detection/detector.py
================================================
import numpy as np
import tensorflow as tf

from anonymizer.utils import Box


class Detector:
    def __init__(self, kind, weights_path):
        self.kind = kind

        self.detection_graph = tf.Graph()
        with self.detection_graph.as_default():
            od_graph_def = tf.GraphDef()
            with tf.gfile.GFile(weights_path, 'rb') as fid:
                serialized_graph = fid.read()
                od_graph_def.ParseFromString(serialized_graph)
                tf.import_graph_def(od_graph_def, name='')

        conf = tf.ConfigProto()
        self.session = tf.Session(graph=self.detection_graph, config=conf)

    def _convert_boxes(self, num_boxes, scores, boxes, image_height, image_width, detection_threshold):
        assert detection_threshold >= 0.001, 'Threshold cannot be too close to 0.'

        result_boxes = []
        for i in range(int(num_boxes)):
            score = float(scores[i])
            if score < detection_threshold:
                continue

            y_min, x_min, y_max, x_max = map(float, boxes[i].tolist())
            box = Box(y_min=y_min * image_height, x_min=x_min * image_width,
                      y_max=y_max * image_height, x_max=x_max * image_width,
                      score=score, kind=self.kind)
            result_boxes.append(box)
        return result_boxes

    def detect(self, image, detection_threshold):
        image_tensor = self.detection_graph.get_tensor_by_name('image_tensor:0')
        num_detections = self.detection_graph.get_tensor_by_name('num_detections:0')
        detection_scores = self.detection_graph.get_tensor_by_name('detection_scores:0')
        detection_boxes = self.detection_graph.get_tensor_by_name('detection_boxes:0')

        image_height, image_width, channels = image.shape
        assert channels == 3, f'Invalid number of channels: {channels}. ' \
                              f'Only images with three color channels are supported.'

        np_images = np.array([image])
        num_boxes, scores, boxes = self.session.run(
            [num_detections, detection_scores, detection_boxes],
            feed_dict={image_tensor: np_images})

        converted_boxes = self._convert_boxes(num_boxes=num_boxes[0], scores=scores[0], boxes=boxes[0],
                                              image_height=image_height, image_width=image_width,
                                              detection_threshold=detection_threshold)
        return converted_boxes


================================================
FILE: anonymizer/detection/weights.py
================================================
from pathlib import Path

from google_drive_downloader import GoogleDriveDownloader as gdd


WEIGHTS_GDRIVE_IDS = {
    '1.0.0': {
        'face': '1CwChAYxJo3mON6rcvXsl82FMSKj82vxF',
        'plate': '1Fls9FYlQdRlLAtw-GVS_ie1oQUYmci9g'
    }
}


def get_weights_path(base_path, kind, version='1.0.0'):
    assert version in WEIGHTS_GDRIVE_IDS.keys(), f'Invalid weights version "{version}"'
    assert kind in WEIGHTS_GDRIVE_IDS[version].keys(), f'Invalid weights kind "{kind}"'

    return str(Path(base_path) / f'weights_{kind}_v{version}.pb')


def _download_single_model_weights(download_directory, kind, version):
    file_id = WEIGHTS_GDRIVE_IDS[version][kind]
    weights_path = get_weights_path(base_path=download_directory, kind=kind, version=version)
    if Path(weights_path).exists():
        return

    print(f'Downloading {kind} weights to {weights_path}')
    gdd.download_file_from_google_drive(file_id=file_id, dest_path=weights_path, unzip=False)


def download_weights(download_directory, version='1.0.0'):
    for kind in ['face', 'plate']:
        _download_single_model_weights(download_directory=download_directory, kind=kind, version=version)


================================================
FILE: anonymizer/obfuscation/__init__.py
================================================
from anonymizer.obfuscation.obfuscator import Obfuscator

__all__ = ['Obfuscator']


================================================
FILE: anonymizer/obfuscation/helpers.py
================================================
import numpy as np
import tensorflow as tf


def kernel_initializer(kernels):
    """ Wrapper for an initializer of convolution weights.

    :param kernels: 3D numpy array of kernels [filter_height, filter_width, num_kernels].
    :return: Callable initializer object.
    """
    assert len(kernels.shape) == 3
    kernels = kernels.astype(np.float32)

    def _initializer(shape, dtype=tf.float32, partition_info=None):
        """Initializer function which is called from tensorflow internally.

        :param shape: Runtime / Construction time shape of the tensor.
        :param dtype: Data type of the resulting tensor.
        :param partition_info: Placeholder for internal tf call.
        :return: 4D numpy array with weights [filter_height, filter_width, in_channels, out_channels].
        """
        if shape:
            # second last dimension is input, last dimension is output
            fan_in = float(shape[-2]) if len(shape) > 1 else float(shape[-1])
            fan_out = float(shape[-1])
        else:
            fan_in = 1.0
            fan_out = 1.0

        assert fan_out == 1 and fan_in == kernels.shape[-1]

        # define weight matrix (set dtype always to float32)
        # weights = np.expand_dims(kernels, axis=2)
        weights = np.expand_dims(kernels, axis=-1)

        return weights

    return _initializer


def bilinear_filter(filter_size=(4, 4)):
    """
    Make a 2D bilinear kernel suitable for upsampling of the given (h, w) size.
    Also allows asymmetric kernels.

    :param filter_size: Tuple defining the filter size as (height, width).

    :return: 2D numpy array containing bilinear weights.
    """
    assert isinstance(filter_size, (list, tuple)) and len(filter_size) == 2

    factor = [(size + 1) // 2 for size in filter_size]
    # define first center dimension
    if filter_size[0] % 2 == 1:
        center_x = factor[0] - 1
    else:
        center_x = factor[0] - 0.5
    # define second center dimension
    if filter_size[1] % 2 == 1:
        center_y = factor[1] - 1
    else:
        center_y = factor[1] - 0.5

    og = np.ogrid[:filter_size[0], :filter_size[1]]
    kernel = (1 - abs(og[0] - center_x) / float(factor[0])) * (1 - abs(og[1] - center_y) / float(factor[1]))

    return kernel


def get_default_session_config(memory_fraction=0.9):
    """ Returns the default session configuration.

    :param memory_fraction: Fraction of the GPU memory the process may allocate up front (growing is allowed).
    :return: Tensorflow session configuration object.
    """
    conf = tf.ConfigProto()
    conf.gpu_options.per_process_gpu_memory_fraction = memory_fraction
    conf.gpu_options.allocator_type = 'BFC'
    conf.gpu_options.allow_growth = True
    conf.allow_soft_placement = True

    return conf
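
To make `bilinear_filter` concrete: for an even 4x4 filter both centers fall at 1.5, giving per-axis weights [0.25, 0.75, 0.75, 0.25] whose outer product is the 2D kernel. A standalone sketch that mirrors the arithmetic above in pure NumPy (`bilinear_filter_sketch` is an illustrative copy, not the library function):

```python
import numpy as np


def bilinear_filter_sketch(filter_size=(4, 4)):
    # Mirrors the arithmetic of bilinear_filter for illustration only.
    factor = [(size + 1) // 2 for size in filter_size]
    # Odd sizes center on a pixel, even sizes center between two pixels.
    centers = [f - 1 if s % 2 == 1 else f - 0.5 for s, f in zip(filter_size, factor)]
    og = np.ogrid[:filter_size[0], :filter_size[1]]
    return ((1 - abs(og[0] - centers[0]) / float(factor[0])) *
            (1 - abs(og[1] - centers[1]) / float(factor[1])))


kernel = bilinear_filter_sketch((4, 4))
print(kernel[1, 1])      # 0.5625 (= 0.75 * 0.75)
print(kernel.sum())      # 4.0
```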


================================================
FILE: anonymizer/obfuscation/obfuscator.py
================================================
import math

import numpy as np
import scipy.stats as st
import tensorflow as tf

from anonymizer.obfuscation.helpers import kernel_initializer, bilinear_filter, get_default_session_config


class Obfuscator:
    """ This class is used to blur box regions within an image with gaussian blurring. """
    def __init__(self, kernel_size=21, sigma=2, channels=3, box_kernel_size=9, smooth_boxes=True):
        """
        :param kernel_size: Size of the blurring kernel.
        :param sigma: Standard deviation of the blurring kernel. Higher values lead to stronger blurring.
        :param channels: Number of image channels this blurrer will be used for. This is fixed as blurring kernels will
            be created for each channel only once.
        :param box_kernel_size: This parameter is only used when smooth_boxes is True. In this case, a smoothing
            operation is applied on the bounding box mask to create smooth transitions from blurred to normal image at
            the bounding box borders.
        :param smooth_boxes: Flag defining whether bounding box mask borders should be smoothed.
        """
        # Kernel must be uneven because of a simplified padding scheme
        assert kernel_size % 2 == 1

        self.kernel_size = kernel_size
        self.box_kernel_size = box_kernel_size
        self.sigma = sigma
        self.channels = channels
        self.smooth_boxes = smooth_boxes

        # create internal kernels (3D kernels with the channels in the last dimension)
        kernel = self._gaussian_kernel(kernel_size=self.kernel_size, sigma=self.sigma)  # kernel for blurring
        self.kernels = np.repeat(kernel, repeats=channels, axis=-1).reshape((kernel_size, kernel_size, channels))
        mean_kernel = bilinear_filter(filter_size=(box_kernel_size, box_kernel_size))  # kernel for smoothing
        self.mean_kernel = np.expand_dims(mean_kernel/np.sum(mean_kernel), axis=-1)

        # visualization
        # print(self.kernels.shape)
        # self._visualize_kernel(kernel=self.kernels[..., 0])
        # self._visualize_kernel(kernel=self.mean_kernel[..., 0])

        # wrap everything in a tf session which is always open
        sess = tf.Session(config=get_default_session_config(0.9))
        self._build_graph()
        init_op = tf.global_variables_initializer()
        sess.run(init_op)

        self.sess = sess

    def _gaussian_kernel(self, kernel_size=30, sigma=5):
        """ Returns a 2D Gaussian kernel array.

        :param kernel_size: Size of the kernel, the resulting array will be kernel_size x kernel_size
        :param sigma: Standard deviation of the gaussian kernel.
        :return: 2D numpy array containing a gaussian kernel.
        """

        interval = (2 * sigma + 1.) / kernel_size
        x = np.linspace(-sigma - interval / 2., sigma + interval / 2., kernel_size + 1)
        kern1d = np.diff(st.norm.cdf(x))
        kernel_raw = np.sqrt(np.outer(kern1d, kern1d))
        kernel = kernel_raw / kernel_raw.sum()

        return kernel

    def _build_graph(self):
        """ Builds the tensorflow graph containing all necessary operations for the blurring procedure. """
        with tf.variable_scope('gaussian_blurring'):
            image = tf.placeholder(dtype=tf.float32, shape=[None, None, None, self.channels], name='x_input')
            mask = tf.placeholder(dtype=tf.float32, shape=[None, None, None, 1], name='mask_input')

            # ---- mean smoothing
            if self.smooth_boxes:
                W_mean = tf.get_variable(name='mean_kernel',
                                         shape=[self.mean_kernel.shape[0], self.mean_kernel.shape[1], 1, 1],
                                         dtype=tf.float32,
                                         initializer=kernel_initializer(kernels=self.mean_kernel),
                                         trainable=False, validate_shape=True)

                smoothed_mask = tf.nn.conv2d(input=mask, filter=W_mean, strides=[1, 1, 1, 1], padding='SAME',
                                             use_cudnn_on_gpu=True, data_format='NHWC', name='smooth_mask')
            else:
                smoothed_mask = mask

            # ---- blurring the initial image
            W_blur = tf.get_variable(name='gaussian_kernels',
                                     shape=[self.kernels.shape[0], self.kernels.shape[1], self.kernels.shape[2], 1],
                                     dtype=tf.float32,
                                     initializer=kernel_initializer(kernels=self.kernels),
                                     trainable=False, validate_shape=True)

            # Use reflection padding in conjunction with convolutions without padding (no border effects)
            pad = (self.kernel_size - 1) // 2  # integer division; kernel_size is asserted to be odd
            paddings = np.array([[0, 0], [pad, pad], [pad, pad], [0, 0]])
            img = tf.pad(image, paddings=paddings, mode='REFLECT')
            blurred_image = tf.nn.depthwise_conv2d_native(input=img, filter=W_blur, strides=[1, 1, 1, 1],
                                                          padding='VALID', data_format='NHWC', name='conv_spatial')

            # Combination of the blurred image and the original image with a bounding box mask
            anonymized_image = image * (1-smoothed_mask) + blurred_image * smoothed_mask

            # store internal variables
            self.image = image
            self.mask = mask
            self.anonymized_image = anonymized_image

    def _get_all_masks(self, bboxes, images):
        """ For a batch of boxes, returns heatmap encoded box images.

        :param bboxes: 3D np array containing a batch of box coordinates (see anonymize for more details).
        :param images: 4D np array with NHWC encoding containing a batch of images.
        :return: 4D np array in NHWC encoding. For each batch sample, there is a binary mask with one channel which
            encodes bounding box locations.
        """
        masks = np.zeros(shape=(images.shape[0], images.shape[1], images.shape[2], 1))
        image_size = (images.shape[1], images.shape[2])

        for n, boxes in enumerate(bboxes):
            masks[n, ...] = self._get_box_mask(box_array=boxes, image_size=image_size)

        return masks

    def _get_box_mask(self, box_array, image_size):
        """ For an array of boxes for a single image, return a binary mask which encodes box locations as heatmap.

        :param box_array: 2D numpy array with dimensions: number_bboxes x 4.
            Boxes are encoded as [x_min, y_min, x_max, y_max]
        :param image_size: tuple containing the image dimensions. This is used to create the binary mask layout.
        :return: 3D numpy array containing the binary mask (last dimension is always size 1).
        """
        # assert isinstance(box_array, np.ndarray) and len(box_array.shape) == 2
        mask = np.zeros(shape=(image_size[0], image_size[1], 1))

        # insert box masks into array
        for box in box_array:
            mask[box[1]:box[3], box[0]:box[2], :] = 1

        return mask

    def _obfuscate_numpy(self, images, bboxes):
        """ Anonymizes bounding box regions within a given region by applying gaussian blurring.

        :param images: 4D np array with NHWC encoding containing a batch of images.
            The number of channels must match self.num_channels.
        :param bboxes: 3D np array containing a batch of box coordinates. First dimension is the batch dimension.
            Second dimension are boxes within an image and third dimension are the box coordinates.
            np.array([[[10, 15, 30, 50], [500, 200, 850, 300]]]) contains one batch sample and two boxes for that
            sample. Box coordinates are in [x_min, y_min, x_max, y_max] notation.
        :return: 4D np array with NHWC encoding containing an anonymized batch of images.
        """
        # assert isinstance(images, np.ndarray) and len(images.shape) == 4
        # assert isinstance(bboxes, np.ndarray) and len(bboxes.shape) == 3 and bboxes.shape[-1] == 4
        bbox_masks = self._get_all_masks(bboxes=bboxes, images=images)

        anonymized_image = self.sess.run(fetches=self.anonymized_image,
                                         feed_dict={self.image: images, self.mask: bbox_masks})
        return anonymized_image

    def obfuscate(self, image, boxes):
        """
        Anonymize all bounding boxes in a given image.
        :param image: The image as np.ndarray with shape==(height, width, channels).
        :param boxes: A list of boxes.
        :return: The anonymized image.
        """
        if len(boxes) == 0:
            return np.copy(image)

        image_array = np.expand_dims(image, axis=0)
        box_array = []
        for box in boxes:
            x_min = int(math.floor(box.x_min))
            y_min = int(math.floor(box.y_min))
            x_max = int(math.ceil(box.x_max))
            y_max = int(math.ceil(box.y_max))
            box_array.append(np.array([x_min, y_min, x_max, y_max]))
        box_array = np.stack(box_array, axis=0)
        box_array = np.expand_dims(box_array, axis=0)

        anonymized_images = self._obfuscate_numpy(image_array, box_array)
        return anonymized_images[0]
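
The core compositing step above — `image * (1 - smoothed_mask) + blurred_image * smoothed_mask` — can be sketched without TensorFlow. The following illustration substitutes a precomputed stand-in for the blurred image and skips mask smoothing; it is a minimal sketch of the idea, not the class's actual TF graph:

```python
import numpy as np


def composite_sketch(image, blurred, boxes):
    # Illustrative NumPy version of the mask compositing in _build_graph.
    # boxes are [x_min, y_min, x_max, y_max]; no mask smoothing here.
    mask = np.zeros(image.shape[:2] + (1,))
    for x_min, y_min, x_max, y_max in boxes:
        mask[y_min:y_max, x_min:x_max, :] = 1
    return image * (1 - mask) + blurred * mask


image = np.ones((8, 8, 3))
blurred = np.zeros_like(image)  # stand-in for the Gaussian-blurred image
out = composite_sketch(image, blurred, [[2, 2, 5, 5]])
print(out[3, 3, 0], out[0, 0, 0])  # 0.0 1.0 (inside the box blurred, outside untouched)
```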


================================================
FILE: anonymizer/utils/__init__.py
================================================
from anonymizer.utils.box import Box

__all__ = ['Box']


================================================
FILE: anonymizer/utils/box.py
================================================
class Box:
    def __init__(self, x_min, y_min, x_max, y_max, score, kind):
        self.x_min = float(x_min)
        self.y_min = float(y_min)
        self.x_max = float(x_max)
        self.y_max = float(y_max)
        self.score = float(score)
        self.kind = str(kind)

    def __repr__(self):
        return f'Box({self.x_min}, {self.y_min}, {self.x_max}, {self.y_max}, {self.score}, {self.kind})'

    def __eq__(self, other):
        if isinstance(other, Box):
            return (self.x_min == other.x_min and self.y_min == other.y_min and
                    self.x_max == other.x_max and self.y_max == other.y_max and
                    self.score == other.score and self.kind == other.kind)
        return False
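
`Box` stores float coordinates; `Obfuscator.obfuscate` above converts them to integer pixel bounds with floor on the minima and ceil on the maxima, so partially covered pixels are always included. A standalone illustration of that conversion (`to_pixel_bounds` is a hypothetical helper mirroring that logic):

```python
import math


def to_pixel_bounds(x_min, y_min, x_max, y_max):
    # Conservative rounding as in Obfuscator.obfuscate: floor the mins,
    # ceil the maxes, so the integer box never shrinks the detection.
    return (int(math.floor(x_min)), int(math.floor(y_min)),
            int(math.ceil(x_max)), int(math.ceil(y_max)))


print(to_pixel_bounds(10.3, 4.9, 29.1, 20.0))  # (10, 4, 30, 20)
```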


================================================
FILE: pytest.ini
================================================
[pytest]
filterwarnings =
    ignore:.*inspect\.getargspec\(\) is deprecated:DeprecationWarning


================================================
FILE: requirements.txt
================================================
pytest==3.9.1
flake8==3.5.0
numpy==1.15.2
tensorflow-gpu==1.11.0
scipy==1.1.0
Pillow==5.3.0
requests==2.20.0
googledrivedownloader==0.3
tqdm==4.28.0


================================================
FILE: setup.py
================================================
#!/usr/bin/env python

from distutils.core import setup
from setuptools import find_packages

setup(name='uai-anonymizer',
      version='latest',
      packages=find_packages(exclude=['test', 'test.*']),

      install_requires=[
          'pytest>=3.9.1',
          'flake8>=3.5.0',
          'numpy>=1.15.2',
          'tensorflow-gpu>=1.11.0',
          'scipy>=1.1.0',
          'Pillow>=5.3.0',
          'requests>=2.20.0',
          'googledrivedownloader>=0.3',
          'tqdm>=4.28.0',
      ],

      dependency_links=[
      ],
      )


================================================
FILE: test/__init__.py
================================================


================================================
FILE: test/anonymization/__init__.py
================================================


================================================
FILE: test/anonymization/anonymizer_test.py
================================================
import numpy as np
from PIL import Image

from anonymizer.utils import Box
from anonymizer.anonymization import Anonymizer


def load_np_image(image_path):
    image = Image.open(image_path).convert('RGB')
    np_image = np.array(image)
    return np_image


class MockObfuscator:
    def obfuscate(self, image, boxes):
        obfuscated_image = np.copy(image)
        for box in boxes:
            obfuscated_image[int(box.y_min):int(box.y_max), int(box.x_min):int(box.x_max), :] = 0.0
        return obfuscated_image


class MockDetector:
    def __init__(self, detected_boxes):
        self.detected_boxes = detected_boxes

    def detect(self, image, detection_threshold):
        return self.detected_boxes


class TestAnonymizer:
    @staticmethod
    def test_it_anonymizes_a_single_image():
        np.random.seed(42)  # to avoid flaky tests
        input_image = np.random.rand(128, 64, 3)  # height, width, channels
        obfuscator = MockObfuscator()
        mock_detector = MockDetector([Box(y_min=0, x_min=10, y_max=20, x_max=30, score=0.5, kind=''),
                                      Box(y_min=100, x_min=10, y_max=120, x_max=30, score=0.9, kind='')])
        expected_anonymized_image = np.copy(input_image)
        expected_anonymized_image[0:20, 10:30] = 0.0
        expected_anonymized_image[100:120, 10:30] = 0.0

        anonymizer = Anonymizer(detectors={'face': mock_detector}, obfuscator=obfuscator)
        anonymized_image, detected_boxes = anonymizer.anonymize_image(input_image, detection_thresholds={'face': 0.1})

        assert np.all(np.isclose(expected_anonymized_image, anonymized_image))
        assert detected_boxes == [Box(y_min=0, x_min=10, y_max=20, x_max=30, score=0.5, kind=''),
                                  Box(y_min=100, x_min=10, y_max=120, x_max=30, score=0.9, kind='')]

    @staticmethod
    def test_it_anonymizes_multiple_images(tmp_path):
        np.random.seed(42)  # to avoid flaky tests
        input_images = [np.random.rand(128, 64, 3), np.random.rand(128, 64, 3), np.random.rand(128, 64, 3)]
        obfuscator = MockObfuscator()
        mock_detector = MockDetector([Box(y_min=0, x_min=10, y_max=20, x_max=30, score=0.5, kind=''),
                                      Box(y_min=100, x_min=10, y_max=120, x_max=30, score=0.9, kind='')])
        expected_anonymized_images = list(map(np.copy, input_images))
        for i, _ in enumerate(expected_anonymized_images):
            expected_anonymized_images[i] = (expected_anonymized_images[i] * 255).astype(np.uint8)
            expected_anonymized_images[i][0:20, 10:30] = 0
            expected_anonymized_images[i][100:120, 10:30] = 0
        # write input images to disk
        input_path = tmp_path / 'input'
        input_path.mkdir()
        output_path = tmp_path / 'output'
        for i, input_image in enumerate(input_images):
            image_path = input_path / f'{i}.png'
            pil_image = Image.fromarray((input_image * 255).astype(np.uint8), mode='RGB')
            pil_image.save(image_path)

        anonymizer = Anonymizer(detectors={'face': mock_detector}, obfuscator=obfuscator)
        anonymizer.anonymize_images(str(input_path), output_path=str(output_path), detection_thresholds={'face': 0.1},
                                    file_types=['jpg', 'png'], write_json=False)

        anonymized_images = []
        for image_path in sorted(output_path.glob('**/*.png')):
            anonymized_images.append(load_np_image(image_path))

        for i, expected_anonymized_image in enumerate(expected_anonymized_images):
            assert np.all(np.isclose(expected_anonymized_image, anonymized_images[i]))


================================================
FILE: test/detection/__init__.py
================================================


================================================
FILE: test/detection/detector_test.py
================================================
import numpy as np
from PIL import Image

from anonymizer.utils import Box
from anonymizer.detection import Detector
from anonymizer.detection import download_weights, get_weights_path


def box_covers_box(covering_box: Box, covered_box: Box):
    return (covered_box.x_min > covering_box.x_min and covered_box.y_min > covering_box.y_min and
            covered_box.x_max < covering_box.x_max and covered_box.y_max < covering_box.y_max)


def load_np_image(image_path):
    image = Image.open(image_path).convert('RGB')
    np_image = np.array(image)
    return np_image


class TestDetector:
    @staticmethod
    def test_it_detects_obvious_faces(tmp_path):
        weights_directory = tmp_path / 'weights'
        face_weights_path = get_weights_path(weights_directory, kind='face')
        download_weights(weights_directory)

        detector = Detector(kind='face', weights_path=face_weights_path)
        np_image = load_np_image('./test/detection/face_test_image.jpg')

        left_face = Box(x_min=267, y_min=64, x_max=311, y_max=184, score=0.0, kind='face')
        right_face = Box(x_min=369, y_min=68, x_max=420, y_max=152, score=0.0, kind='face')

        boxes = detector.detect(np_image, detection_threshold=0.2)

        assert len(boxes) >= 2
        for box in boxes:
            assert box.score >= 0.2
        assert boxes[0].score >= 0.5 and boxes[1].score >= 0.5
        assert ((box_covers_box(boxes[0], left_face) and box_covers_box(boxes[1], right_face)) or
                (box_covers_box(boxes[1], left_face) and box_covers_box(boxes[0], right_face)))


================================================
FILE: test/detection/weights_test.py
================================================
from anonymizer.detection import download_weights


class TestDownloadWeights:
    @staticmethod
    def test_it_downloads_weights(tmp_path):
        weights_directory = tmp_path / 'weights'
        assert len(list(weights_directory.glob('**/*.pb'))) == 0

        download_weights(download_directory=weights_directory, version='1.0.0')

        assert len(list(weights_directory.glob('**/*.pb'))) == 2
        assert (weights_directory / 'weights_face_v1.0.0.pb').is_file()
        assert (weights_directory / 'weights_plate_v1.0.0.pb').is_file()
        assert not (weights_directory / 'nonexistent_path.pb').is_file()


================================================
FILE: test/obfuscation/__init__.py
================================================


================================================
FILE: test/obfuscation/obfuscator_test.py
================================================
import numpy as np

from anonymizer.obfuscation import Obfuscator
from anonymizer.utils import Box


class TestObfuscator:
    @staticmethod
    def test_it_obfuscates_regions():
        obfuscator = Obfuscator()
        np.random.seed(42)  # to avoid flaky tests
        image = np.random.rand(128, 64, 3)  # height, width, channels
        boxes = [Box(y_min=0, x_min=10, y_max=20, x_max=30, score=0, kind=''),
                 Box(y_min=100, x_min=10, y_max=120, x_max=30, score=0, kind='')]

        # copy to make sure the input image does not change
        obfuscated_image = obfuscator.obfuscate(np.copy(image), boxes)

        assert obfuscated_image.shape == (128, 64, 3)
        assert not np.any(np.isclose(obfuscated_image[0:20, 10:30, :], image[0:20, 10:30, :]))
        assert not np.any(np.isclose(obfuscated_image[100:120, 10:30, :], image[100:120, 10:30, :]))
        assert np.all(np.isclose(obfuscated_image[30:90, :, :], image[30:90, :, :]))


================================================
FILE: test/utils/__init__.py
================================================


================================================
FILE: test/utils/box_test.py
================================================
from anonymizer.utils import Box


class TestBox:
    @staticmethod
    def test_it_has_coordinates_a_score_and_a_kind():
        box = Box(x_min=1.0, y_min=2.0, x_max=3.0, y_max=4.0, score=0.9, kind='face')

        assert box.x_min == 1.0
        assert box.y_min == 2.0
        assert box.x_max == 3.0
        assert box.y_max == 4.0
        assert box.score == 0.9
        assert box.kind == 'face'
SYMBOL INDEX (50 symbols across 12 files)

FILE: anonymizer/anonymization/anonymizer.py
  function load_np_image (line 9) | def load_np_image(image_path):
  function save_np_image (line 15) | def save_np_image(image, image_path):
  function save_detections (line 20) | def save_detections(detections, detections_path):
  class Anonymizer (line 35) | class Anonymizer:
    method __init__ (line 36) | def __init__(self, detectors, obfuscator):
    method anonymize_image (line 40) | def anonymize_image(self, image, detection_thresholds):
    method anonymize_images (line 49) | def anonymize_images(self, input_path, output_path, detection_threshol...

FILE: anonymizer/bin/anonymize.py
  function parse_args (line 25) | def parse_args():
  function main (line 79) | def main(input_path, image_output_path, weights_path, image_extensions, ...

FILE: anonymizer/detection/detector.py
  class Detector (line 7) | class Detector:
    method __init__ (line 8) | def __init__(self, kind, weights_path):
    method _convert_boxes (line 22) | def _convert_boxes(self, num_boxes, scores, boxes, image_height, image...
    method detect (line 38) | def detect(self, image, detection_threshold):

FILE: anonymizer/detection/weights.py
  function get_weights_path (line 14) | def get_weights_path(base_path, kind, version='1.0.0'):
  function _download_single_model_weights (line 21) | def _download_single_model_weights(download_directory, kind, version):
  function download_weights (line 31) | def download_weights(download_directory, version='1.0.0'):

FILE: anonymizer/obfuscation/helpers.py
  function kernel_initializer (line 5) | def kernel_initializer(kernels):
  function bilinear_filter (line 40) | def bilinear_filter(filter_size=(4, 4)):
  function get_default_session_config (line 69) | def get_default_session_config(memory_fraction=0.9):

FILE: anonymizer/obfuscation/obfuscator.py
  class Obfuscator (line 10) | class Obfuscator:
    method __init__ (line 12) | def __init__(self, kernel_size=21, sigma=2, channels=3, box_kernel_siz...
    method _gaussian_kernel (line 51) | def _gaussian_kernel(self, kernel_size=30, sigma=5):
    method _build_graph (line 67) | def _build_graph(self):
    method _get_all_masks (line 108) | def _get_all_masks(self, bboxes, images):
    method _get_box_mask (line 124) | def _get_box_mask(self, box_array, image_size):
    method _obfuscate_numpy (line 141) | def _obfuscate_numpy(self, images, bboxes):
    method obfuscate (line 160) | def obfuscate(self, image, boxes):

FILE: anonymizer/utils/box.py
  class Box (line 1) | class Box:
    method __init__ (line 2) | def __init__(self, x_min, y_min, x_max, y_max, score, kind):
    method __repr__ (line 10) | def __repr__(self):
    method __eq__ (line 13) | def __eq__(self, other):

FILE: test/anonymization/anonymizer_test.py
  function load_np_image (line 8) | def load_np_image(image_path):
  class MockObfuscator (line 14) | class MockObfuscator:
    method obfuscate (line 15) | def obfuscate(self, image, boxes):
  class MockDetector (line 22) | class MockDetector:
    method __init__ (line 23) | def __init__(self, detected_boxes):
    method detect (line 26) | def detect(self, image, detection_threshold):
  class TestAnonymizer (line 30) | class TestAnonymizer:
    method test_it_anonymizes_a_single_image (line 32) | def test_it_anonymizes_a_single_image():
    method test_it_anonymizes_multiple_images (line 50) | def test_it_anonymizes_multiple_images(tmp_path):

FILE: test/detection/detector_test.py
  function box_covers_box (line 9) | def box_covers_box(covering_box: Box, covered_box: Box):
  function load_np_image (line 14) | def load_np_image(image_path):
  class TestDetector (line 20) | class TestDetector:
    method test_it_detects_obvious_faces (line 22) | def test_it_detects_obvious_faces(tmp_path):

FILE: test/detection/weights_test.py
  class TestDownloadWeights (line 4) | class TestDownloadWeights:
    method test_it_downloads_weights (line 6) | def test_it_downloads_weights(tmp_path):

FILE: test/obfuscation/obfuscator_test.py
  class TestObfuscator (line 7) | class TestObfuscator:
    method test_it_obfuscates_regions (line 9) | def test_it_obfuscates_regions():

FILE: test/utils/box_test.py
  class TestBox (line 4) | class TestBox:
    method test_it_has_coordinates_a_score_and_a_kind (line 6) | def test_it_has_coordinates_a_score_and_a_kind():

About this extraction

This page contains the full source code of the understand-ai/anonymizer GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 30 files (47.5 KB, approximately 11.9k tokens) and a symbol index of 50 extracted functions, classes, methods, constants, and types.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
