Repository: BMW-InnovationLab/BMW-Anonymization-API
Branch: master
Commit: 9e8702a20d70
Files: 33
Total size: 81.8 KB

Directory structure:
gitextract_hvd0l8rm/

├── .gitignore
├── LICENSE
├── README.md
├── docker/
│   ├── dockerfile
│   └── requirements.txt
├── docker-compose.yml
├── docker-compose_tf_gluoncv.yml
├── docker_compose_readme.md
├── jsonFiles/
│   ├── url_configuration.json
│   └── user_configuration.json
├── references/
│   └── techniques.md
├── src/
│   └── main/
│       ├── APIClient.py
│       ├── ConfigurationSchema.json
│       ├── __init__.py
│       ├── anonymization/
│       │   ├── __init__.py
│       │   ├── base_anonymization.py
│       │   ├── detection_anonymization.py
│       │   └── segmentation_anonymization.py
│       ├── anonymization_service.py
│       ├── anonymized_video/
│       │   └── .gitignore
│       ├── config.py
│       ├── exceptions.py
│       ├── helpers.py
│       ├── labels.py
│       ├── models.py
│       ├── start.py
│       ├── strategy_context.py
│       ├── supported_methods/
│       │   ├── __init__.py
│       │   └── common_labels.py
│       └── urlConfigurationSchema
├── testing_script/
│   ├── test.py
│   └── user_configuration.json
└── url_for_openvino_compose/
    └── url_configuration.json

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
#   For a library or package, you might want to ignore these files since the code is
#   intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
#   However, in case of collaboration, if having platform-specific dependencies or dependencies
#   having no cross-platform support, pipenv may install dependencies that don't work, or not
#   install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

.idea

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

================================================
FILE: LICENSE
================================================
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: README.md
================================================
# BMW-Anonymization-Api

Data privacy and individuals’ anonymity are and always have been a major concern for data-driven companies. 

Therefore, we designed and implemented an anonymization API that localizes and obfuscates (i.e. hides) sensitive information in images/videos in order to preserve the individuals' anonymity. The main features of our anonymization tool are the following:
* **Agnostic in terms of localization techniques**: our API currently supports [Semantic segmentation](https://github.com/BMW-InnovationLab/BMW-Semantic-Segmentation-Inference-API-GPU-CPU) and [Object Detection](https://github.com/BMW-InnovationLab/BMW-TensorFlow-Inference-API-GPU).
* **Modular in terms of sensitive information**: the user can train a Deep Learning (DL) model for [object detection](https://github.com/BMW-InnovationLab/BMW-TensorFlow-Training-GUI) and [semantic segmentation](https://github.com/BMW-InnovationLab/BMW-Semantic-Segmentation-Training-GUI) to localize the sensitive information she/he wishes to protect, e.g., individual's face or body, personal belongings, vehicles...
* **Scalable in terms of anonymization techniques**: our API currently supports pixelating, blurring, and blackening (masking). Additional anonymization techniques can also be configured, as described below. For the highest level of privacy, we recommend using the blackening technique with degree 1.
* **Supports DL-based models optimized via the [Intel® OpenVINO™ toolkit v2021.1](https://docs.openvinotoolkit.org/latest/index.html) for CPU usage**: DL-based models optimized and deployed via the [Openvino Segmentation Inference API](https://github.com/BMW-InnovationLab/BMW-IntelOpenVINO-Segmentation-Inference-API) and the [Openvino Detection Inference API](https://github.com/BMW-InnovationLab/BMW-IntelOpenVINO-Inference-API) can also be used.
* **Compatible with the BMW Deep Learning tools**: DL models trained via our [training](https://github.com/BMW-InnovationLab/BMW-TensorFlow-Training-GUI) and deployed via our [inference](https://github.com/BMW-InnovationLab/BMW-TensorFlow-Inference-API-GPU) APIs are compatible with this anonymization API. 

<p align="center">
  <img src="references/output_7.gif" alt="animated" />
</p>


## General Architecture & Deployment Mode:

Our anonymization API receives an image along with a JSON object through which the user specifies mainly: 
* The sensitive information she/he wishes to obfuscate.
* The anonymization technique.
* The anonymization degree.
* The localization technique. 

![](references/architecture_2.png) 

You can deploy the anonymization API either:
* As a standalone docker container which can be connected to other inference APIs ([object detection](https://github.com/BMW-InnovationLab/BMW-YOLOv4-Inference-API-CPU) or [semantic segmentation](https://github.com/BMW-InnovationLab/BMW-Semantic-Segmentation-Inference-API-GPU-CPU)) deployed within a standalone docker container as well.
* As a network of docker containers along with other inference APIs running on the same machine via docker-compose. (please check the [following link](./docker_compose_readme.md) for the docker-compose deployment).


## Prerequisites:
- docker
- docker-compose

### Check for prerequisites

#### To check if docker-ce is installed:
```sh
docker --version
```

#### To check if docker-compose is installed:
```sh
docker-compose --version
```

### Install prerequisites

#### Ubuntu

To install [Docker](https://docs.docker.com/engine/install/ubuntu/) and [Docker Compose](https://docs.docker.com/compose/install/) on Ubuntu, please follow the link.

#### Windows 10

To [install Docker on Windows](https://docs.docker.com/docker-for-windows/install/), please follow the link.

**P.S.: For Windows users, open the Docker Desktop menu by clicking the Docker icon in the notifications area. Select Settings, then the Advanced tab, to adjust the resources available to Docker Engine.**

## Build The Docker Image

As mentioned before, this container can be deployed using either **docker** or **docker-compose**.

* If you wish to deploy this API using **docker-compose**, please refer to the [following link](./docker_compose_readme.md). After deploying the API with docker-compose, please consider returning to this documentation for further information about the API endpoints and the user configuration file sample sections.

* If you wish to deploy this API using **docker**, please continue with the following docker build and run commands.

In order to build the project run the following command from the project's root directory:

```sh
 docker build -t anonymization_api -f docker/dockerfile .
```
#### Build behind a proxy
In order to build the image behind a proxy use the following command in the project's root directory:
```sh
docker build --build-arg http_proxy='your_proxy' --build-arg https_proxy='your_proxy' -t anonymization_api -f ./docker/dockerfile .
```

## Run the docker container

To run the API, go to the API's directory and run the following:

#### Using Linux based docker:

```sh
sudo docker run -itv $(pwd)/src/main:/main -v $(pwd)/jsonFiles:/jsonFiles -p <port_of_your_choice>:4343 anonymization_api
```
##### Behind a proxy:
```sh
sudo docker run -itv $(pwd)/src/main:/main -v $(pwd)/jsonFiles:/jsonFiles  --env HTTP_PROXY="" --env HTTPS_PROXY="" --env http_proxy="" --env https_proxy="" -p 5555:4343 anonymization_api
```

#### Using Windows based docker:

```sh
docker run -itv ${PWD}/src/main:/main -v ${PWD}/jsonFiles:/jsonFiles -p <port_of_your_choice>:4343 anonymization_api
```

The API file will run automatically, and the service will listen for HTTP requests on the chosen port.

## API Endpoints

To see all available endpoints, open your favorite browser and navigate to:

```
http://<machine_IP>:<docker_host_port>/docs
```

### Endpoints summary
![](references/endpoints.png) 

#### Configuration 


##### /set_url (POST)

Set the URL of the inference API that you wish to connect to the Anonymization API. If the specified URL is unreachable due to connection problems, it will not be added to the [JSON url_configuration file](https://github.com/BMW-InnovationLab/BMW-Anonymization-API/blob/master/jsonFiles/url_configuration.json). The URL should be specified in the following format: "http://ip:port/".
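As a sketch, registering an inference API from Python using only the standard library might look like the following. The query-parameter name `url` is an assumption here; check the interactive `/docs` page for the exact request schema.

```python
import re
import urllib.parse
import urllib.request

def is_valid_inference_url(url: str) -> bool:
    # The API expects the format "http://ip:port/" (trailing slash included).
    return re.fullmatch(r"http://[\w.\-]+:\d+/", url) is not None

def set_url(anonymization_base: str, inference_url: str) -> int:
    # Hypothetical request shape: the parameter name and placement may differ
    # from the deployed API; verify against /docs before relying on this.
    query = urllib.parse.urlencode({"url": inference_url})
    req = urllib.request.Request(
        f"{anonymization_base}/set_url?{query}", method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Validating the URL format locally before posting avoids a round trip that would be rejected anyway.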

##### /list_urls (GET)

Returns the URLs of the inference APIs that were already configured via the /set_url POST request.

##### /remove_url (POST)

Removes the specified URL from the [JSON url_configuration file](https://github.com/BMW-InnovationLab/BMW-Anonymization-API/blob/master/jsonFiles/url_configuration.json)

##### /remove_all_urls (POST)

Removes all available urls from the [JSON url_configuration file](https://github.com/BMW-InnovationLab/BMW-Anonymization-API/blob/master/jsonFiles/url_configuration.json)

##### /available_methods/ (GET)

After setting the inference URLs via the /set_url request, the user can view the Anonymization API's configuration by issuing the /available_methods request. Mainly, the user can view (i) the supported sensitive information (label_names), (ii) the supported localization techniques, (iii) the inference URLs, and (iv) the DL model names configured in the deployed anonymization API, as seen below.




<img alt="" src="./references/available_methods.gif?raw=" width="800" >




#### Anonymization 
##### /anonymize/ (POST)

Anonymizes the input image based on the [user's JSON configuration file](https://github.com/BMW-InnovationLab/BMW-Anonymization-API/blob/master/jsonFiles/user_configuration.json)



<img alt="" src="./references/anonymize.gif?raw=" width="800" >
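Since the endpoint takes an image upload together with the JSON configuration, the request body is multipart/form-data. A stdlib-only sketch of building such a body is shown below; the part names `"image"` and `"configuration"` are assumptions, so check the `/docs` page for the field names the deployed API actually expects.

```python
import io
import json
import uuid

def multipart_body(image_bytes: bytes, config: dict):
    """Build a multipart/form-data body carrying an image and a JSON config.

    Part names are hypothetical placeholders, not the API's confirmed schema.
    """
    boundary = uuid.uuid4().hex
    body = io.BytesIO()

    def part(name, filename, content_type, payload):
        body.write(f"--{boundary}\r\n".encode())
        disposition = f'Content-Disposition: form-data; name="{name}"'
        if filename:
            disposition += f'; filename="{filename}"'
        body.write(f"{disposition}\r\nContent-Type: {content_type}\r\n\r\n".encode())
        body.write(payload)
        body.write(b"\r\n")

    part("image", "input.jpg", "image/jpeg", image_bytes)
    part("configuration", None, "application/json", json.dumps(config).encode())
    body.write(f"--{boundary}--\r\n".encode())  # closing boundary
    return body.getvalue(), f"multipart/form-data; boundary={boundary}"
```

The returned bytes and content-type string can then be posted with `urllib.request.Request` to `http://<machine_IP>:<docker_host_port>/anonymize/`.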




##### /anonymize_video/ (POST)

Anonymizes a video based on the user's sensitive info and saves the anonymized video in `src/main/anonymized_video` under <original_video_name>_TIMESTAMP.mp4



<img src="./references/anonymize_video.gif?raw=" width="800" >


#### Video Anonymization Time
Anonymizing a video can take a while. You can estimate the expected duration with the following formula:
**Video_Anonymization_Time = Video_Length x Number_Of_Frames_Per_Second x Anonymization_Time_Of_Each_Frame**
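The formula amounts to (total frames) x (per-frame anonymization time) and can be evaluated with a quick sketch; the per-frame timing below is taken from the yolov4 640x768 row in the benchmark section, while the clip length and frame rate are made up for illustration:

```python
def estimated_anonymization_time(video_length_s: float,
                                 fps: float,
                                 time_per_frame_s: float) -> float:
    """Total frames (length x fps) times the per-frame anonymization time."""
    return video_length_s * fps * time_per_frame_s

# A 10-second clip at 30 fps, with ~0.27 s per frame, takes roughly 81 s.
print(estimated_anonymization_time(10, 30, 0.27))
```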


## User configuration file sample

In order to anonymize an image, the user should specify the different details in the [user's JSON configuration file](https://github.com/BMW-InnovationLab/BMW-Anonymization-API/blob/master/jsonFiles/user_configuration.json)

Please check a sample in the below image:

![](references/json_file.PNG)

Note that the URL field is optional; you can add it in case you want to use a specific URL of a running API, as shown for the first sensitive info. If this field is not specified, the URL defined in the url_configuration.json file will be used by default, provided it matches all the requirements.
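Because the configuration is plain JSON, it can also be assembled programmatically before each request. The field names in this sketch are purely illustrative placeholders; the authoritative schema is `src/main/ConfigurationSchema.json` and the sample `jsonFiles/user_configuration.json`:

```python
import json

def build_user_config(entries):
    """Assemble a user-configuration payload from
    (label, technique, degree, optional_url) tuples.

    All field names here are hypothetical placeholders, not the real schema.
    """
    sensitive = []
    for label, technique, degree, url in entries:
        info = {"label": label, "technique": technique, "degree": degree}
        if url:  # optional per-entry URL, as described above
            info["url"] = url
        sensitive.append(info)
    return json.dumps({"sensitive_info": sensitive}, indent=2)

config = build_user_config([
    ("person", "blackening", 1.0, "http://172.17.0.1:8090/"),
    ("car", "blurring", 0.8, None),
])
```

Entries without a URL simply omit the field, so the default from url_configuration.json applies to them.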

## To add a new technique to the API:
Please refer to the [add new technique documentation](references/techniques.md) for more information on how to add a new anonymization technique to the API, with common and custom labels.

## Benchmark

### Object Detection

|**GPU**|**Network**  |**Width**  |**Height**  |**Inference Time (s)**  |**Anonymization Time (s)** |**Total Time (s)** |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Titan RTX  |  yolov4 |  640 | 768| 0.2  |0.07  |0.27 |
|Titan RTX  |  yolov4 |  1024 | 768| 0.4  |0.14  |0.54 |
|Titan RTX  |  yolov4 |  2048 | 1024| 1.2  |0.6  |1.8 |
|Titan RTX  |  yolov4 |  3840 | 2160| 4.8  |0.6  |5.4 |



### Object Detection with OpenVINO model and Intel Core i7-1185G7 

The model was trained with the TensorFlow Object Detection API (TF version 1.14) and then converted to OpenVINO IR using [Intel&reg; OpenVINO&trade; toolkit v2021.4](https://docs.openvinotoolkit.org/latest/index.html) </br>
<span style="font-size:2em;">Results may vary. For workloads and configurations visit: www.intel.com/PerformanceIndex and Legal Information. </span>

|**CPU**|**Network**  |**Precision** |**Width**  |**Height**  |**Inference Time (s)**  |**Anonymization Time (s)** |**Total Time (s)** <br/> for Avg, Max, Min|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Intel Core <br/> i7-1185G7  |  Faster R-CNN  <br/> Input Shape: [3,600,600]  |  FP32 | 1024| 768  |0.51  |0.09  |0.60, 0.67, 0.54  |
|Intel Core <br/> i7-1185G7  |  Faster R-CNN  <br/> Input Shape: [3,600,600] |  FP32  | 2048 | 1536  |0.56  |0.24  |0.80,  0.97, 0.70  |
|Intel Core <br/> i7-1185G7  |  Faster R-CNN  <br/> Input Shape: [3,600,600]  |  INT8 | 1024| 768  |0.16  |0.09  |0.25, 0.27, 0.22  |
|Intel Core <br/> i7-1185G7  |  Faster R-CNN  <br/> Input Shape: [3,600,600] |  INT8| 2048 | 1536  |0.19  |0.24  |0.43,  0.56, 0.36  |


### Semantic Segmentation

|**GPU**|**Network**  |**Width**  |**Height**  |**Inference Time (s)**  |**Anonymization Time (s)** |**Total Time (s)** |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Titan RTX  |  psp resnet 101 |  640 | 768| 0.2  |0.8 |1.1 |
|Titan RTX  |  psp resnet 101 |  1024 | 768| 0.3  |0.8  |1.1 |
|Titan RTX  |  psp resnet 101 |  2048 | 1024| 0.9  |1.0  |1.9 |
|Titan RTX  |  psp resnet 101 |  3840 | 2160| 2.0  |3.0  |5.0 |



## Possible Error

- You may encounter the error below when starting the docker container, in both the standalone and the docker-compose versions ![url_error](references/url_error.png) 
- If you do, please make sure that the URLs of the inference APIs listed in `jsonFiles/url_configuration.json` are still reachable. A possible solution is to empty `jsonFiles/url_configuration.json` as shown below before starting the container:

    ```
    {
      "urls": []
    }
    ```

## Citing

If you use this repository in your research, consider citing it using the following BibTeX entries:
```
@inproceedings{Tekli2021DesigningAE,
  title={Designing and evaluating anonymization techniques for images and relational data streams via Machine Learning approaches at BMW Group. (Conception et {\'e}valuation de techniques d'anonymisation des images et des flux de donn{\'e}es relationnels via des approches d'apprentissage automatique {\`a} BMW Group)},
  author={Jimmy Tekli},
  year={2021},
  url={https://api.semanticscholar.org/CorpusID:266756928}
}
```
and 
```
@misc{bmwanotool,
  author = {BMW TechOffice MUNICH},
  title = {BMW Anonymization Tool},
  howpublished = {\url{https://github.com/BMW-InnovationLab/BMW-Anonymization-API}},
  year = {2019},
}
```



## Acknowledgments

Ghenwa Aoun

Antoine Charbel, [inmind.ai](https://inmind.ai/), Beirut, Lebanon

Roy Anwar

Fady Dib

Jimmy Tekli, BMW Innovation Lab, Munich, Germany

[OpenVINO Toolkit](https://github.com/openvinotoolkit)

[intel.com](https://intel.com)

[robotron.de](https://www.robotron.de)


================================================
FILE: docker/dockerfile
================================================
FROM python:3.7

COPY docker/requirements.txt .
COPY src/main /main

RUN apt-get update && apt-get install -y ffmpeg \
    libsm6 \
    libxext6

RUN python -m pip install --upgrade pip 
RUN pip install -r requirements.txt

WORKDIR /main

CMD ["uvicorn", "start:app", "--host", "0.0.0.0", "--port", "4343"]


================================================
FILE: docker/requirements.txt
================================================
moviepy==1.0.3
aiofiles==0.8.0
fastapi==0.70.1
opencv-python==4.5.4.60
jsonschema==3.2.0
numpy==1.21.0
python-multipart==0.0.5
uvicorn==0.16.0
Pillow==9.0.0
requests==2.26.0


================================================
FILE: docker-compose.yml
================================================
version: '3'
services:

  openvino_detection_api:
    build: 
      context: ../BMW-IntelOpenVINO-Detection-Inference-API
      dockerfile: Dockerfile
    image: openvino_detection.api:latest
    networks:
      - anonym-net
    ports:
      - "8081:80"
    volumes:
      - "../BMW-IntelOpenVINO-Detection-Inference-API/models:/models"
      - "../BMW-IntelOpenVINO-Detection-Inference-API/models_hash:/models_hash"

  openvino_segmentation_api:
    build: 
      context: ../BMW-IntelOpenVINO-Segmentation-Inference-API
      dockerfile: docker/Dockerfile
    image: openvino_segmentation.api:latest
    networks:
      - anonym-net
    ports:
      - "8090:80"
    volumes:
      - "../BMW-IntelOpenVINO-Segmentation-Inference-API/models:/models"
      - "../BMW-IntelOpenVINO-Segmentation-Inference-API/models_hash:/models_hash"
      
  anonymization_api:
    image: anonymize.api:latest
    build: 
      context: .
      dockerfile: docker/dockerfile
    networks:
      - anonym-net
    ports:
      - "8070:4343"
    volumes:
      - "./jsonFiles:/jsonFiles"
      - "./src/main/anonymized_video/:/main/anonymized_video"
    depends_on:
      - openvino_detection_api
      - openvino_segmentation_api

networks: 
  anonym-net:


================================================
FILE: docker-compose_tf_gluoncv.yml
================================================
version: "2.3"
services:
  detection_api:
    image: tensorflow_inference_api_cpu:latest
    build:
      context: ../BMW-TensorFlow-Inference-API-CPU
      dockerfile: docker/dockerfile
    volumes:
      - ../BMW-TensorFlow-Inference-API-CPU/models:/models
      - ../BMW-TensorFlow-Inference-API-CPU/models_hash:/models_hash
    ports:
      - "9998:4343"
      
  segmentation_api:
    image: gluoncv_segmentation_inference_api_cpu:latest
    build:
      context: ../BMW-Semantic-Segmentation-Inference-API-GPU-CPU
      dockerfile: CPU/dockerfile
     
    volumes:
      - ../BMW-Semantic-Segmentation-Inference-API-GPU-CPU/models:/models
      - ../BMW-Semantic-Segmentation-Inference-API-GPU-CPU/models_hash:/models_hash
    runtime: nvidia
    ports:
      - "9999:4343"
    environment:
      - NVIDIA_VISIBLE_DEVICES=1
      
  anonymization:
    image: anonymization_api:latest 
    build:
      context: .
      dockerfile: docker/dockerfile
    volumes:
      - ./jsonFiles:/jsonFiles
      - ./src/main/anonymized_video:/main/anonymized_video
    ports:
      - "9997:4343"
    depends_on:
      - detection_api
      - segmentation_api


================================================
FILE: docker_compose_readme.md
================================================
# Deploying the BMW-Anonymization-Api with docker compose

In this section, docker compose will build and run a network of containers including the Anonymization API alongside multiple inference APIs.

In the following section, we encapsulate the [BMW-IntelOpenVINO-Inference-API](https://github.com/BMW-InnovationLab/BMW-IntelOpenVINO-Inference-API) and the [BMW-IntelOpenVINO-Segmentation-API](https://github.com/BMW-InnovationLab/BMW-IntelOpenVINO-Segmentation-Inference-API) with our anonymization API. 

These two inference APIs contain example models optimized via OpenVINO. Other OpenVINO models in Intermediate Representation (IR) format, converted via the [Intel&reg; OpenVINO&trade; toolkit v2021.1](https://docs.openvinotoolkit.org/latest/index.html), can be deployed with our APIs. Currently, OpenVINO supports conversion of DL-based models trained via several Machine Learning frameworks, including Caffe and TensorFlow. Please refer to [the OpenVINO documentation](https://docs.openvinotoolkit.org/2021.1/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model.html) for further details on converting your model.


## Build and Run the network

Docker compose will build and run a network of containers comprising the Anonymization API and the OpenVINO inference APIs for detection and segmentation. The instructions are provided below.

To run the APIs together, clone the [BMW-Anonymization-API](https://github.com/BMW-InnovationLab/BMW-Anonymization-API), the [BMW-IntelOpenVINO-Inference-API](https://github.com/BMW-InnovationLab/BMW-IntelOpenVINO-Inference-API) and the [BMW-IntelOpenVINO-Segmentation-API](https://github.com/BMW-InnovationLab/BMW-IntelOpenVINO-Segmentation-Inference-API) into the same directory.

The folder structure should look similar to the following:

```shell
│──BMW-Anonymization-API
  │──docker 
  |──jsonFiles  
  │──...
  |──docker-compose.yml  
  │──Readme.md  
│──BMW-IntelOpenVINO-Segmentation-API 
  │──docker 
  |──...
  │──docs  
  │──Readme.md
│──BMW-IntelOpenVINO-Detection-Inference-API
  │──docker 
  |──...
  │──docs  
  │──Readme.md
  
```

In the BMW-Anonymization-API, replace `./BMW-Anonymization-API/jsonFiles/url_configuration.json` with the provided `./url_for_openvino_compose/url_configuration.json`.

Three services are configured in the `docker-compose.yml` file in this repository: the [BMW-Anonymization-API](https://github.com/BMW-InnovationLab/BMW-Anonymization-API), the [BMW-IntelOpenVINO-Inference-API](https://github.com/BMW-InnovationLab/BMW-IntelOpenVINO-Inference-API) and the [BMW-IntelOpenVINO-Segmentation-API](https://github.com/BMW-InnovationLab/BMW-IntelOpenVINO-Segmentation-Inference-API). You can modify the build context to specify the base directories of the APIs (ensure the correct path is also given for the mounted volumes). You can also modify the host ports you wish to use for the APIs. 

After you have configured your docker-compose.yml file, run the following commands in the anonymization API directory.

### Build the images
To build the images, run the following command in this directory:
```sh
docker-compose build
```

### Run the network
To run the network, use the following command in this directory:
```sh
docker-compose up
```

### Stop the running containers
To stop the network, run the following command in this directory:
```sh
docker-compose down
```

### Restart the network
To restart the network, run the following command in this directory:
```sh
docker-compose restart
```

## API Endpoints

To see all available endpoints, open your favorite browser and navigate to:
```
http://<machine_IP>:<docker_host_port>/docs
```
If you use the standard configuration of the `docker-compose.yml`, the following endpoints are available:
| API | Endpoint |
| ------------------ | ------------------ |
| BMW-Anonymization-API | http://localhost:8070/docs |
| BMW-IntelOpenVINO-Detection-API | http://localhost:8081/docs |
| BMW-IntelOpenVINO-Segmentation-API | http://localhost:8090/docs |

**Please refer to the Endpoints Summary section in the [initial readme](https://github.com/BMW-InnovationLab/Anonymization_API/tree/priority-3)**

## Using other inference APIs

Other inference APIs can also be configured within the docker-compose.yml, such as our [tensorflow CPU detection API](https://github.com/BMW-InnovationLab/BMW-TensorFlow-Inference-API-CPU) and [semantic segmentation CPU/GPU](https://github.com/BMW-InnovationLab/BMW-IntelOpenVINO-Segmentation-Inference-API).
If you wish to deploy other inference APIs, please make sure to adjust the docker-compose.yml accordingly:
- Modify the context in order to specify the base directory of each API
- Modify the dockerfile entry to match the path of the dockerfile in the API directory 
- Modify the ports and choose the ones you wish to use for each API
- In case you are setting up a GPU-based inference API, do not forget to set the runtime entry to "nvidia"
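
The adjustments above can be sketched as a service entry like the following. This is only an illustration: the service name, image name, paths, and ports are placeholders to adapt to your setup.

```yaml
  my_inference_api:
    image: my_inference_api:latest      # placeholder image name
    build:
      context: ../My-Inference-API      # base directory of the cloned API
      dockerfile: docker/dockerfile     # path of the dockerfile inside that API
    ports:
      - "8085:4343"                     # host port of your choice
    runtime: nvidia                     # only for GPU-based inference APIs
```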

We provide a sample docker-compose file: `./BMW-Anonymization-API/docker-compose_tf_gluoncv.yml`



================================================
FILE: jsonFiles/url_configuration.json
================================================
{
  "urls": [
  ]
}


================================================
FILE: jsonFiles/user_configuration.json
================================================
{
  "sensitive_info": [
    {
      "model_name": "sample_model",
      "class_name": "person",
      "anonymization_technique": "blackening",
      "inference_type": "segmentation",
      "anonymization_degree": 1
    }
  ]
}


================================================
FILE: references/techniques.md
================================================
# Add a new technique to the API

It is mandatory that the techniques you are adding are actually implemented.
These are the steps that should be applied so that the anonymization technique you are adding becomes applicable:
- Go to "src/main/anonymization/base_anonymization.py"
- Add the signature of the method, similarly to what is already implemented (name + parameters); this method will be overridden in the other files.
- Implement this method in the files of the "/src/main/anonymization" directory (except base_anonymization.py).
  These files, for example detection_anonymization.py and segmentation_anonymization.py, consist of two different classes, both extending the BaseAnonymization class.
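
The steps above can be sketched as follows, using a placeholder technique name (`my_new_technique`); the real signature mirrors the existing blurring/pixelating/blackening methods:

```python
from abc import ABC, abstractmethod


class BaseAnonymization(ABC):
    # existing abstract methods (blurring, pixelating, blackening) omitted

    @abstractmethod
    def my_new_technique(self, image, response, degree=None, id=None, mask=None):
        pass


class DetectionAnonymization(BaseAnonymization):
    def my_new_technique(self, image, response, degree=None, id=None, mask=None):
        # the detection-specific implementation of the technique goes here
        return image
```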

## Types of labels

This API contains two types of labels: 

- Common labels
- Special labels

## Adding common labels

Common labels are the ones that support common techniques (techniques that can be applied to all labels, such as blurring, pixelating and blackening in our case).
Special labels are the ones that support, in addition to the common techniques, a specific technique that must be specified.
Based on the above, add the following after implementing the technique:

If you want to add a common technique (for common labels; that can be applied to all labels):

Just go to "/src/main/supported_methods/common_labels.py" and add the name of the technique as an attribute to the CommonLabels class.
For example:

![](./common_technique.PNG)

All the labels will automatically support the newly added technique if it is actually implemented as mentioned above.
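
As a sketch, the attribute addition could look like the following. This is a hypothetical version of the CommonLabels class (the real common_labels.py may differ), and `my_new_technique` is a placeholder name:

```python
# Hypothetical sketch of /src/main/supported_methods/common_labels.py;
# the real file may store its techniques differently.
class CommonLabels:
    blurring = "blurring"
    pixelating = "pixelating"
    blackening = "blackening"
    # newly added common technique (must be implemented in the
    # anonymization classes as described above)
    my_new_technique = "my_new_technique"
```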

## Adding special labels

If you want to add a special technique (for a special label; that can be applied only to this label):
- Go to "/src/main/supported_methods"
- Create a new Python file whose name is the name of the special label
- This file should contain a class that represents the special label; this class will extend the CommonLabels class
- Add the special technique as an attribute
- For example, if we want to add the faceswap technique, which can only be applied to the face label, we should create a face.py file that looks like the following:

![](./special_technique.PNG)

This way, the face label will support all the common techniques in addition to the special one (faceswap), which will only be applied to it, provided the faceswap technique is correctly implemented.
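
A self-contained sketch of such a face.py is given below; a minimal stand-in CommonLabels is defined here so the example runs on its own, whereas in the project the class would be imported from common_labels.py:

```python
class CommonLabels:
    # stand-in for the real class in common_labels.py
    blurring = "blurring"
    pixelating = "pixelating"
    blackening = "blackening"


class Face(CommonLabels):
    # special technique, applicable only to the "face" label
    faceswap = "faceswap"
```

Face thus inherits every common technique while adding its own special one.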

================================================
FILE: src/main/APIClient.py
================================================
import io
import time
import json
import requests
import jsonschema
from exceptions import InvalidUrlConfiguration, ApplicationError


class ApiClient:
    def __init__(self):
        self.configuration = []
        self.url_list = self.get_url_configuration()
        self.get_api_configuration()

    def get_configuration(self):
        return self.configuration

    @staticmethod
    def get_url_configuration():
        """
        :return: List of all the api urls provided in the url_configuration file
        """
        with open('../jsonFiles/url_configuration.json') as f:
            data = json.load(f)
            urls = data["urls"]
            try:
                validate_url_configuration(data)
            except Exception as e:
                raise InvalidUrlConfiguration(str(e))
            return urls

    def get_api_configuration(self):
        for url in self.url_list:
            self.get_models(url)

    @staticmethod
    def get_model_names(url: str):
        # give the inference APIs time to start up before querying them
        time.sleep(5)
        response = requests.get(
            url=url + "models")
        models_list = response.json()["data"]["models"]
        return models_list

    def get_models(self, url: str):
        """
        Returns a list of json objects representing the configuration of each api
        corresponding to each url in the url_configuration file
        :param url: Each url in the url_configuration file
        :return: List of json objects
        """
        models_list = self.get_model_names(url)
        for model_name in models_list:
            labels_list = self.get_labels(url, model_name)
            model_type = self.get_model_configuration(url, model_name)
            palette = None
            if "segmentation" in model_type:
                palette = self.get_palette(url, model_name)
            self.configuration.append({
                "name": model_name,
                "labels": labels_list,
                "type": model_type,
                "url": url,
                "palette": palette
            })

    @staticmethod
    def get_palette(url: str, model_name: str):
        response = requests.get(
            url=url + "models/" + model_name + "/palette"
        )
        return response.json()["data"]

    @staticmethod
    def get_labels(url: str, model_name: str):
        response = requests.get(
            url=url + "models/" + model_name + "/labels"
        )
        return response.json()["data"]

    @staticmethod
    def get_model_configuration(url: str, model_name: str):
        response = requests.get(
            url=url + "models/" + model_name + "/config"
        )
        return response.json()["data"]["type"]

    @staticmethod
    def get_detection_response(url: str, model_name: str, im):
        response = requests.post(
            url=url + "models/" + model_name + "/predict",
            files={'input_data': io.BytesIO(im.tobytes())})
        return response.json()

    @staticmethod
    def get_segmentation_response(url: str, model_name: str, im):
        response = requests.post(
            url=url + "models/" + model_name + "/inference",
            files={'input_data': io.BytesIO(im.tobytes())}
        )
        return response


def validate_url_configuration(data):
    """
    Validate the url_configuration file by comparing it to the urlConfigurationSchema
    :param data: The data from the url_configuration file
    """
    with open('urlConfigurationSchema') as f:
        schema = json.load(f)
    try:
        jsonschema.validate(data, schema)
    except Exception as e:
        raise InvalidUrlConfiguration(e)


================================================
FILE: src/main/ConfigurationSchema.json
================================================
{
    "type": "object",
    "properties": 
    {
      "sensitive_info": 
      {
        "type": "array",
        "items": [
          {
            "type": "object",
            "properties": 
            {
              "model_name": 
              {
                "type": "string"
              },
              "class_name": 
              {
                "type": "string"
              },
              "anonymization_technique": 
              {
                "type": "string"
              },
              "inference_type": 
              {
                "type": "string"
              },
              "anonymization_degree": 
              {
                "type": "number",
                "minimum": 0,
                "maximum": 1
              }
            },
            "required": [
              "model_name",
              "class_name",
              "anonymization_technique",
              "inference_type",
              "anonymization_degree"
            ]
          }
        ]
      }
    },
    "required": [
      "sensitive_info"
    ]
}  

================================================
FILE: src/main/__init__.py
================================================


================================================
FILE: src/main/anonymization/__init__.py
================================================


================================================
FILE: src/main/anonymization/base_anonymization.py
================================================
from abc import ABC, abstractmethod


class BaseAnonymization(ABC):
    """
    Base anonymization class for the detection and the semantic anonymization
    """
    @abstractmethod
    def blurring(self, image, response, degree=None, id=None, mask=None):
        pass

    @abstractmethod
    def pixelating(self, image, response, degree=None, id=None, mask=None):
        pass

    @abstractmethod
    def blackening(self, image, response, degree=None, id=None, mask=None):
        pass


================================================
FILE: src/main/anonymization/detection_anonymization.py
================================================
from anonymization.base_anonymization import BaseAnonymization
from PIL import ImageFilter, Image


def find_boxes(bbox):
    # copy the list of bounding boxes parsed from the inference response
    return list(bbox)


class DetectionAnonymization(BaseAnonymization):
    def __init__(self):
        pass

    def blurring(self, image, response, degree=None, id=None, mask=None):
        """
        Blur the detected objects based on the user's requirements
        :param image: input image
        :param response: The response parsed from the object detection api
        :param degree: The degree of the anonymization (specified in the user_configuration file)
        :param id:
        :param mask:
        :return: The anonymized image
        """
        boxes = find_boxes(response)
        for i in boxes:
            cropped_image = image.crop((i[0], i[1], i[2], i[3]))
            blurred_image = cropped_image.filter(ImageFilter.GaussianBlur(25*float(degree)))
            image.paste(blurred_image, (i[0], i[1], i[2], i[3]))
        return image

    def pixelating(self, image, response, degree=None, id=None, mask=None):
        """
        Pixelate the detected objects based on the user's requirements
        :param image: input image
        :param response: The response parsed from the object detection api
        :param degree: The degree of the anonymization (specified in the user_configuration file)
        :param id:
        :param mask:
        :return: The anonymized image
        """
        boxes = find_boxes(response)
        for i in boxes:
            cropped_image = image.crop((i[0], i[1], i[2], i[3]))
            w, h = cropped_image.size
            small = cropped_image.resize((int(w / (float(degree) * w)), int(h / (float(degree) * h))), Image.BILINEAR)
            result = small.resize(cropped_image.size, Image.NEAREST)
            image.paste(result, (i[0], i[1], i[2], i[3]))
        return image

    def blackening(self, image, response, degree=None, id=None, mask=None):
        """
        Blacken the detected objects based on the user's requirements
        :param image: input image
        :param response: The response parsed from the object detection api
        :param degree: The degree of the anonymization (specified in the user_configuration file)
        :param id:
        :param mask:
        :return: The anonymized image
        """
        boxes = find_boxes(response)
        for i in boxes:
            cropped = image.crop((i[0], i[1], i[2], i[3]))
            w, h = cropped.size  # PIL's size is (width, height)
            black = Image.new(str(image.mode), (w, h), 'black')
            result = Image.blend(cropped, black, float(degree))
            cropped.paste(result)
            image.paste(cropped, (i[0], i[1], i[2], i[3]))
        return image


================================================
FILE: src/main/anonymization/segmentation_anonymization.py
================================================
from anonymization.base_anonymization import BaseAnonymization
from PIL import ImageFilter, Image
import numpy as np
import cv2
import io

class SegmentationAnonymization(BaseAnonymization):
    def __init__(self):
        pass

    def blurring(self, image, response, degree=None, id=None, mask=None):
        """
        Blur the segmented objects based on the user's requirements
        :param image: input image
        :param response: The response parsed from the semantic segmentation api
        :param degree: The degree of the anonymization (specified in the user_configuration file)
        :param id: The id of the segmented class
        :param mask: The mask we will apply the anonymization on
        :return: The anonymized image
        """
        cropped = image.crop((response[0], response[1], response[2], response[3]))
        blurred = cropped.filter(ImageFilter.GaussianBlur(25 * float(degree)))
        mask = Image.open(io.BytesIO(mask.content))
        img = Image.fromarray(np.array(mask))
        im = img.crop((response[0], response[1], response[2], response[3]))
        rgb_image = np.array(im.convert(mode="RGB"))
        src = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2BGR)
        tmp = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
        # keep only the pixels whose grayscale value matches the class id
        ex1 = cv2.inRange(tmp, int(id), int(id))
        ex, alpha = cv2.threshold(ex1, 0, 255, cv2.THRESH_BINARY)
        b, g, r = cv2.split(src)
        rgba = [b, g, r, alpha]
        dst = cv2.merge(rgba, 4)
        test = Image.fromarray(np.array(dst))
        image.paste(blurred, (response[0], response[1], response[2], response[3]), mask=test)
        return image

    def pixelating(self, image, response, degree=None, id=None, mask=None):
        """
         Pixelate the segmented objects based on the user's requirements
         :param image: input image
         :param response: The response parsed from the semantic segmentation api
         :param degree: The degree of the anonymization (specified in the user_configuration file)
         :param id: The id of the segmented class
         :param mask: The mask we will apply the anonymization on
         :return: The anonymized image
         """
        cropped = image.crop((response[0], response[1], response[2], response[3]))
        mask = Image.open(io.BytesIO(mask.content))
        img = Image.fromarray(np.array(mask))
        im = img.crop((response[0], response[1], response[2], response[3]))
        rgb_image = np.array(im.convert(mode="RGB"))
        src = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2BGR)
        tmp = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
        # keep only the pixels whose grayscale value matches the class id
        ex1 = cv2.inRange(tmp, int(id), int(id))
        ex, alpha = cv2.threshold(ex1, 0, 255, cv2.THRESH_BINARY)
        b, g, r = cv2.split(src)
        rgba = [b, g, r, alpha]
        dst = cv2.merge(rgba, 4)
        test = Image.fromarray(np.array(dst))
        w, h = cropped.size
        small = cropped.resize((int(w / (float(degree) * w)), int(h / (float(degree) * h))), Image.BILINEAR)
        result = small.resize(cropped.size, Image.NEAREST)
        image.paste(result, (response[0], response[1], response[2], response[3]), mask=test)
        return image

    def blackening(self, image, response, degree=None, id=None, mask=None):
        """
         Blacken the segmented objects based on the user's requirements
         :param image: input image
         :param response: The response parsed from the semantic segmentation api
         :param degree: The degree of the anonymization (specified in the user_configuration file)
         :param id: The id of the segmented class
         :param mask: The mask we will apply the anonymization on
         :return: The anonymized image
         """
        cropped = image.crop((response[0], response[1], response[2], response[3]))
        mask = Image.open(io.BytesIO(mask.content))
        img = Image.fromarray(np.array(mask))
        im = img.crop((response[0], response[1], response[2], response[3]))
        rgb_image = np.array(im.convert(mode="RGB"))
        src = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2BGR)
        tmp = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
        # keep only the pixels whose grayscale value matches the class id
        ex1 = cv2.inRange(tmp, int(id), int(id))
        ex, alpha = cv2.threshold(ex1, 0, 255, cv2.THRESH_BINARY)
        b, g, r = cv2.split(src)
        rgba = [b, g, r, alpha]
        dst = cv2.merge(rgba, 4)
        test = Image.fromarray(np.array(dst))
        w, h = cropped.size  # PIL's size is (width, height)
        black = Image.new(str(image.mode), (w, h), 'black')
        result = Image.blend(cropped, black, float(degree))
        cropped.paste(result)
        image.paste(cropped, (response[0], response[1], response[2], response[3]), mask=test)
        return image


================================================
FILE: src/main/anonymization_service.py
================================================
import os
import cv2
import sys
import numpy as np
from PIL import Image
from datetime import datetime
from APIClient import ApiClient
from fastapi import File, UploadFile
from exceptions import ApplicationError, InvalidInputData
from strategy_context import StrategyContext
from helpers import get_user_models
import helpers
import moviepy.editor

sys.path.append("anonymization")


class AnonymizationService:

    def __init__(self):
        self.strategy_context = StrategyContext()

    def anonymize(self, image: UploadFile = File(...), configuration: UploadFile = File(...)):
        """
        Calls the correct anonymization method based on the model type and the technique
        :param image: Input image
        :param configuration: user configuration file
        :return: File response representing the anonymized image
        """
        result = None
        im = Image.open(image.file).convert('RGB')
        rgb_image_0 = np.array(im)
        bgr_image_0 = cv2.cvtColor(rgb_image_0, cv2.COLOR_RGB2BGR)
        response = []
        configuration_path = '../jsonFiles/user_configuration.json'

        with open(configuration_path, 'wb') as config:
            config.write(configuration.file.read())
        try:
            users_models = get_user_models(configuration_path)
        except ApplicationError as e:
            raise e
        _, im_png = cv2.imencode(".png", bgr_image_0)
        errors = []
        i = 0
        for each in users_models:
            i = i + 1
            try:
                response, mask = getattr(helpers, "parse_" + each["model_type"] + "_response")(each, im_png, i, errors)
            except Exception as e:
                errors.append(
                    "The model type <" + each["model_type"] + "> in sensitive info <" + str(i) + "> is not supported.")
            if response:
                if not errors:
                    for r in response:
                        inference_type = r['type']
                        technique = r['technique']
                        box = r['boxes']
                        degree = r['degree']
                        label_id = r['label_id']
                        anonymization_name = inference_type + "_anonymization"
                        anonymization_class = anonymization_name.title().replace("_", "")
                        try:
                            result = self.strategy_context.anonymize(
                                getattr(__import__(anonymization_name), anonymization_class)(),
                                technique=technique, image=im, response=box,
                                degree=degree, label_id=label_id, mask=mask)
                        except ApplicationError as e:
                            raise e
                    rgb_image = np.array(result)
                    bgr_image = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2BGR)
                else:
                    bgr_image = None
            else:
                rgb_image = np.array(im)
                bgr_image = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2BGR)
        return bgr_image, errors

    def anonymize_video(self, video: UploadFile = File(...), configuration: UploadFile = File(...)):
        result = None
        configuration_path = '../jsonFiles/user_configuration.json'
        with open(configuration_path, 'wb') as config:
            config.write(configuration.file.read())
        try:
            users_models = get_user_models(configuration_path)
        except ApplicationError as e:
            raise e
        response = []
        with open('video.mp4', 'wb') as v:
            try:
                v.write(video.file.read())
            except Exception as e:
                raise InvalidInputData(e)
            initial_video = moviepy.editor.VideoFileClip("video.mp4")
            initial_video_audio = initial_video.audio
            cap = cv2.VideoCapture('video.mp4')
            fps = cap.get(cv2.CAP_PROP_FPS)
            i = 0
            while cap.isOpened():
                ret, frame = cap.read()
                if ret is True:
                    im = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                    img = Image.fromarray(im)
                    bgr_image_0 = cv2.cvtColor(im, cv2.COLOR_RGB2BGR)
                    _, im_png = cv2.imencode(".png", bgr_image_0)
                    path = os.getcwd() + "/frames/" + str(i) + ".jpg"
                    i = i + 1
                    errors = []
                    n = 0
                    for each in users_models:
                        n = n + 1
                        result = None
                        try:
                            response, mask = getattr(helpers, "parse_" + each["model_type"] + "_response")(each, im_png,
                                                                                                           n, errors)
                        except Exception as e:
                            errors.append("The model type <" + each["model_type"] + "> in sensitive info <" + str(
                                n) + "> is not supported.")
                        if response:
                            if not errors:
                                for r in response:
                                    inference_type = r['type']
                                    technique = r['technique']
                                    box = r['boxes']
                                    degree = r['degree']
                                    label_id = r['label_id']
                                    anonymization_name = inference_type + "_anonymization"
                                    anonymization_class = anonymization_name.title().replace("_", "")
                                    try:
                                        result = self.strategy_context.anonymize(
                                            getattr(__import__(anonymization_name), anonymization_class)(),
                                            technique=technique, image=img, response=box,
                                            degree=degree, label_id=label_id, mask=mask)
                                    except ApplicationError as e:
                                        raise e
                                result.save(path)
                            else:
                                return errors
                        else:
                            img.save(path)
                else:
                    break
            print("Processing ...")
            images = [img for img in os.listdir(os.getcwd() + "/frames") if img.endswith(".jpg")]
            sort = []
            for frame in images:
                name = frame.split(".")[0]
                image_number = int(name)
                sort.append(image_number)
            sort = sorted(sort)
            frame = cv2.imread(os.path.join(os.getcwd() + "/frames", str(sort[0]) + ".jpg"))
            height, width, layers = frame.shape
            fourcc = cv2.VideoWriter_fourcc(*'MP4V')
            output_video_dir = "Anonymized_" + video.filename.split(".")[0] + "_" + datetime.now().strftime(
                "%d_%m_%y_%H_%M_%S")
            video = cv2.VideoWriter('anonymized_video/' + output_video_dir + '.mp4', fourcc, fps, (width, height))
            for image in sort:
                video.write(cv2.imread(os.path.join(os.getcwd() + "/frames", str(image) + ".jpg")))
                os.remove(os.getcwd() + "/frames/" + str(image) + ".jpg")
            cv2.destroyAllWindows()
            video.release()
            anonymized_video = moviepy.editor.VideoFileClip('anonymized_video/' + output_video_dir + '.mp4')
            anonymized_video.audio = initial_video_audio
            os.remove('anonymized_video/' + output_video_dir + '.mp4')
            anonymized_video.write_videofile('anonymized_video/' + output_video_dir + '.mp4')
            os.remove(os.getcwd() + "/video.mp4")
            return "Done"


================================================
FILE: src/main/anonymized_video/.gitignore
================================================
# Ignore everything in this directory
*
# Except this file
!.gitignore


================================================
FILE: src/main/config.py
================================================
master_dict={}


================================================
FILE: src/main/exceptions.py
================================================

class ApplicationError(Exception):
    """Base class for other exceptions"""

    def __init__(self, default_message, additional_message=''):
        self.default_message = default_message
        self.additional_message = additional_message

    def __str__(self):
        return self.get_message()

    def get_message(self):
        return self.default_message if self.additional_message == '' else "{}: {}".format(self.default_message,
                                                                                          self.additional_message)


class InvalidModelConfiguration(ApplicationError):
    """Raised when the model's configuration is corrupted"""

    def __init__(self, additional_message=''):
        super().__init__('Invalid model configuration', additional_message)


class InvalidInputData(ApplicationError):
    """Raised when the input data is corrupted"""

    def __init__(self, additional_message=''):
        super().__init__('Invalid input data', additional_message)


class InvalidUrlConfiguration(ApplicationError):
    """Raised when the model's configuration is corrupted"""

    def __init__(self, additional_message=''):
        super().__init__('Invalid url configuration', additional_message)

================================================
FILE: src/main/helpers.py
================================================
import io
import cv2
import json
import config
import requests
import jsonschema
import numpy as np
from PIL import Image
from APIClient import ApiClient
from labels import labels_methods
from exceptions import ApplicationError, InvalidModelConfiguration

# master_dict = labels_methods()
master_dict = config.master_dict


def get_user_models(configuration_path):
    """ Returns a list of json objects that represent the sensitive info given by the user in
    the configuration file
    :param configuration_path: The user configuration path
    :return: List of json objects
    """
    user_models = []
    with open(configuration_path) as f:
        try:
            data = json.load(f)
        except Exception:
            raise InvalidModelConfiguration("Json file corrupted")
    try:
        validate_json_configuration(data)
    except ApplicationError as e:
        raise e
    for info in data['sensitive_info']:
        url = info.get('url')
        models = {
            'url': url,
            'model_name': info['model_name'],
            'label_name': info['class_name'],
            'model_type': info['inference_type'].casefold(),
            'technique': info['anonymization_technique'].casefold(),
            'degree': info['anonymization_degree']
        }
        user_models.append(models)
    return user_models
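
The normalization above can be sketched in isolation. The following is a minimal, self-contained example (using a hypothetical `sensitive_info` entry in the shape of the user configuration schema) of how one entry is mapped to the internal model dict; note that `casefold()` lowercases the type and technique, and a missing url becomes `None`:

```python
# Minimal sketch of the per-entry normalization performed by get_user_models.
# The entry below is a made-up example, not taken from a real configuration.
def normalize_entry(info):
    return {
        'url': info.get('url'),  # optional; None when the user omits it
        'model_name': info['model_name'],
        'label_name': info['class_name'],
        'model_type': info['inference_type'].casefold(),
        'technique': info['anonymization_technique'].casefold(),
        'degree': info['anonymization_degree'],
    }

entry = {
    "model_name": "yolo",
    "class_name": "person",
    "anonymization_technique": "Blackening",
    "inference_type": "Detection",
    "anonymization_degree": 1,
}
model = normalize_entry(entry)
```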


def parse_inference_response(inference_type, user_config, im, i, errors):
    """ Either returns the response from the inference api or returns an array with the user's configuration file errors
    :param inference_type: i.e detection
    :param user_config: a json object that represent each sensitive info specified by the user
    :param im: the image object that we need to anonymize
    :param i: the index of the sensitive info that we are using
    :param errors: a list that will be filled in case any error in the user's configuration file is present
    :return: inference api response or the list of errors
    """
    master_dict = config.master_dict
    # getting all the supported labels
    labels = list(master_dict.keys())
    # checking if the user's requested label is present in the supported ones
    if user_config["label_name"] in labels:
        # checking if the user's label is supported by the user's specified model type
        if inference_type in master_dict[user_config["label_name"]].keys():
            # getting the urls and the models that supports the user's label and that are compatible with the user's specified model type at the same time
            urls = list(
                master_dict[user_config["label_name"]][inference_type].keys())
            models = list(
                master_dict[user_config["label_name"]][inference_type].values())
            # we check each possible case that can occur:
            # 1- the url is specified in the user's configuration file, this url is present in the matching urls, and the inference type is correct
            if user_config["url"] is not None and user_config["url"] in urls and user_config[
                "model_type"] == inference_type:
                # if all the conditions above are true, we now check if the label is supported by the user's specified url and if the anonymization technique is applicable to this label
                if user_config["model_name"] in master_dict[user_config["label_name"]][user_config["model_type"]][
                    user_config["url"]] and user_config["technique"] in master_dict[user_config["label_name"]][
                    "technique"]:
                    # in this case, we send a request to the inference api to get the response
                    response = getattr(ApiClient, "get_" + inference_type + "_response")(user_config['url'],
                                                                                         user_config['model_name'], im)
                    if inference_type == "segmentation":
                        labels_list = ApiClient.get_labels(user_config['url'], user_config["model_name"])
                        #palette = ApiClient.get_palette(user_config['url'], user_config['model_name'])
                        json_array = get_bbs(response, labels_list, user_config)
                        return response, json_array
                    else:
                        return response
                # here we are filling the errors list with all the errors that are present in the sensitive info and the index of this info in case one of the conditions above is false
                elif user_config["model_name"] not in master_dict[user_config["label_name"]][user_config["model_type"]][
                    user_config["url"]]:
                    errors.append("The model <" + user_config["model_name"] + "> in sensitive info <" + str(
                        i) + "> is not available in the " + inference_type + " url : <" + user_config[
                                      "url"] + "> for the label <" + user_config["label_name"] + ">.")
                elif user_config["technique"] not in master_dict[user_config["label_name"]]["technique"]:
                    errors.append("The technique <" + user_config["technique"] + "> is not supported for the label <" +
                                  user_config["label_name"] + "> in sensitive info <" + str(i) + ">.")
            # 2- the url is specified in the user's configuration file, this url is not between the matching urls, and the inference type is correct
            elif user_config["url"] is not None and user_config["url"] not in urls and user_config[
                "model_type"] == inference_type:
                errors.append("the url <" + user_config["url"] + "> does not belong to the list of urls supported")

            # 3- the user hasn't specified any url and the inference type is correct
            elif user_config["url"] is None and user_config["model_type"] == inference_type:
                model_not_found = True
                inde = None
                for val in models:
                    if user_config["model_name"] in val:
                        inde = models.index(val)
                        model_not_found = False
                        break

                # we check if the specified model is among the matching models and if the specified technique is applicable to this label
                if not model_not_found and user_config["model_name"] in models[inde] and user_config["technique"] in master_dict[user_config["label_name"]]["technique"]:
                    # we choose the first matching url
                    response = getattr(ApiClient, "get_" + inference_type + "_response")(urls[inde],
                                                                                         user_config['model_name'], im)
                    if inference_type == "segmentation":
                        labels_list = ApiClient.get_labels(urls[inde], user_config["model_name"])
                        #palette = ApiClient.get_palette(urls[inde], user_config['model_name'])
                        json_array = get_bbs(response, labels_list, user_config)
                        return response, json_array
                    else:
                        return response
                # here we are filling the errors list with all the errors that are present in the sensitive info and the index of this info in case one of the conditions above is false
                elif model_not_found:
                    errors.append("The model <" + user_config["model_name"] + "> in sensitive info <" + str(
                        i) + "> is not available in the " + inference_type + " for label  <" + user_config[
                                      "label_name"] + ">")
                elif user_config["technique"] not in master_dict[user_config["label_name"]]["technique"]:
                    errors.append("The technique <" + user_config["technique"] + "> is not supported for the label <" +
                                  user_config["label_name"] + "> in sensitive info <" + str(i) + ">.")
        else:
            errors.append("The label <" + user_config["label_name"] + "> in the sensitive info <" + str(
                i) + "> is not supported by a " + inference_type + " api.")
    else:
        errors.append("The label <" + user_config["label_name"] +
                      "> in the sensitive info <" + str(i) + "> is not supported.")
    return errors
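
All of the branching above keys off the shape of `master_dict`: label → inference type → url → list of model names, plus a flat `technique` list per label. Here is a minimal sketch of that structure and the membership checks made before a request is sent to an inference api; the url and model names are made up for illustration:

```python
# Hypothetical master_dict illustrating the nesting that
# parse_inference_response navigates.
master_dict = {
    "person": {
        "detection": {"http://detection_api:80/": ["yolo"]},
        "technique": ["blackening", "pixelating", "blurring"],
    }
}

def is_request_servable(label, inference_type, url, model, technique):
    """Mirror the checks performed before calling the inference api."""
    if label not in master_dict:
        return False  # label not supported at all
    by_type = master_dict[label]
    if inference_type not in by_type:
        return False  # label not served by this kind of api
    urls = by_type[inference_type]
    return (url in urls                          # url serves this label
            and model in urls[url]               # model exists at that url
            and technique in by_type["technique"])  # technique applies to label
```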


def get_bbs(image, labels_list, user_config):
    image = Image.open(io.BytesIO(image.content))
    palette=image.getpalette()
    label_id = labels_list.index(user_config['label_name'])
    rgb_image = np.array(image.convert(mode="RGB"))
    bgr_image = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2BGR)
    response = []

    lower = np.array([palette[(int(label_id) * 3) + 2], palette[(int(label_id) * 3) + 1], palette[(int(label_id) * 3)]])
    mask1 = cv2.inRange(bgr_image, lower, lower)
    contours, _ = cv2.findContours(mask1, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    boxes = []
    for c in contours:
        (x, y, w, h) = cv2.boundingRect(c)
        if (len(np.array(c).flatten()) > 3):
            boxes.append([x, y, x + w, y + h])
    for b in boxes:
        response.append({"ObjectClassId": label_id, "class_name": user_config["label_name"], "bbox": b})
    return response


def parse_detection_response(user_config, im, i, errors):
    """ Parse the object detection api response and returns a list of json objects
    representing the parsed response in addition to some useful user sensitive info
    :param user_config: a json object that represent each sensitive info specified by the user
    :param im: the image we need to anonymize
    :param i: index of the sensitive info
    :param errors: list of errors that will be filled in case any error in the user's configuration file is present
    :return: List of json objects
    """
    boxes = []
    bounding_boxes = []
    response = parse_inference_response("detection", user_config, im, i, errors)
    if not errors:
        for bbox in response['data']['bounding-boxes']:
            if bbox['ObjectClassName'] == user_config['label_name']:
                bounds = [bbox['coordinates']['left'],
                          bbox['coordinates']['top'],
                          bbox['coordinates']['right'],
                          bbox['coordinates']['bottom']]
                boxes.append(bounds)
                bounding_boxes.append({'type': user_config['model_type'],
                                       'technique': user_config['technique'],
                                       'boxes': boxes,
                                       'degree': user_config['degree'],
                                       'label_id': None})
        return bounding_boxes, None
    else:
        return response, None


def parse_segmentation_response(user_config, im, i, errors):
    """ Parse the semantic segmentation api response and returns a list of json objects
    representing the parsed response in addition to some useful user sensitive info
    :param user_config: a json object that represent each sensitive info specified by the user
    :param im: the image we need to anonymize
    :param i: index of the sensitive info
    :param errors: list of errors that will be filled in case any error in the user's configuration file is present
    :return: List of json objects
    """
    bounding_boxes = []
    mask = None
    r = parse_inference_response("segmentation", user_config, im, i, errors)
    try:
        mask, response = r
    except (TypeError, ValueError):
        # parse_inference_response returned a single value, not a (mask, response) pair
        response = r
    if not errors:
        for data in response:
            if data['class_name'] == user_config['label_name']:
                result = {
                    'boxes': data['bbox'],
                    'type': user_config['model_type'],
                    'technique': user_config['technique'],
                    'degree': user_config['degree'],
                    'label_id': data['ObjectClassId']
                }
                bounding_boxes.append(result)
        return bounding_boxes, mask
    else:
        return response, None


def validate_json_configuration(data):
    """Validate the user configuration file by comparing it to the ConfigurationSchema
    :param data: The data from the user configuration file
    """
    with open('ConfigurationSchema.json') as f:
        schema = json.load(f)
    try:
        jsonschema.validate(data, schema)
    except Exception as e:
        raise InvalidModelConfiguration(e)


def check_api_availability(url: str):
    try:
        response = requests.get(url + "models")
    except Exception:
        raise Exception("wrong url format. expected format: http://ip:port/")
    return response


def parse_json(json_path):
    with open(json_path, 'r') as f:
        try:
            payload = json.load(f)
        except Exception:
            raise Exception("Json file corrupted")
    return payload


def write_json(payload, json_path):
    with open(json_path, 'w') as outfile:
        json.dump(payload, outfile)


================================================
FILE: src/main/labels.py
================================================
import os
import sys
import json
from APIClient import ApiClient

sys.path.append("supported_methods")


def labels_methods():
    """
    Returns a list of the available labels with their available anonymization methods, the urls
    that supports them, the type of each url and the model names
    :return: List of json objects
    """
    api_client = ApiClient()
    labels = []
    master_dict = {}
    types = []
    urls = []
    for model in api_client.configuration:
        types.append(model['type'])
        urls.append(model['url'])

        for label in model["labels"]:
            labels.append(label)

    master_dict = master_dict.fromkeys(labels)

    for label in labels:
        url_type = {}
        for type in types:
            for url in urls:
                models = []
                for model in api_client.configuration:
                    if model["type"] == type and model["url"] == url and label in model["labels"]:
                        models.append(model['name'])
                        if model['type'] not in url_type.keys():
                            url_type[model['type']] = {}
                        url_type[model['type']][model['url']] = models
                        master_dict[label] = url_type

    special_labels = []
    for file in os.listdir(os.path.join(os.getcwd(), "supported_methods")):
        if file != 'common_labels.py' and file != '__init__.py' and file != '__pycache__':
            special_label = file.split(".")[0]
            special_labels.append(special_label)

    for user_label in labels:
        if user_label in special_labels:
            class_name = user_label.title()
            x = getattr(__import__(user_label), class_name)()
            attr = getattr(x, "get_labels")(class_name)
            master_dict[user_label]['technique'] = attr

        else:
            class_name = user_label.title()
            x = getattr(__import__('common_labels'), 'CommonLabels')()
            attr = getattr(x, "get_labels")(class_name)
            master_dict[user_label]['technique'] = attr
    return master_dict
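
For a concrete picture of the result, the dict-building core of `labels_methods` can be re-expressed as a toy run over a hypothetical `api_client.configuration` list (names and urls below are made up; this is a simplified sketch, not the repository's exact loop):

```python
# Simplified re-expression of the nesting built by labels_methods:
# label -> type -> url -> list of model names.
configuration = [
    {"name": "yolo", "type": "detection", "url": "http://a:80/", "labels": ["person", "car"]},
    {"name": "frcnn", "type": "detection", "url": "http://b:80/", "labels": ["person"]},
]

master_dict = {}
for model in configuration:
    for label in model["labels"]:
        by_type = master_dict.setdefault(label, {})
        by_url = by_type.setdefault(model["type"], {})
        by_url.setdefault(model["url"], []).append(model["name"])
```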




================================================
FILE: src/main/models.py
================================================
class ApiResponse:

    def __init__(self, success=True, data=None, error=None):
        """
        Defines the response shape
        :param success: A boolean that returns if the request has succeeded or not
        :param data: The model's response
        :param error: The error in case an exception was raised
        """
        self.data = data
        self.error = error.__str__() if error is not None else ''
        self.success = success


================================================
FILE: src/main/start.py
================================================
import io
import cv2
from models import ApiResponse
from labels import labels_methods
from exceptions import ApplicationError
from fastapi import UploadFile, File, FastAPI, Form
from fastapi.responses import StreamingResponse
from anonymization_service import AnonymizationService
import helpers
import config

app = FastAPI(version='2.0', title='BMW Anonymization API',
              description="<h3><b>API that localizes and obfuscates sensitive information in images/videos in order to preserve the individuals' anonymity.</b></h3>"
                          "<b>Developers:</b></br>"
                          "<b>Ghenwa Aoun</b></br>"
                          "<b>Antoine Charbel</b></br>"
                          "<b>Jimmy Tekli</b></br>"
                          "<br><b>Contact us:</b></br>"
                          "<b>BMW Innovation Lab: <a href='mailto:innovation-lab@bmw.de'>innovation-lab@bmw.de</a></b>")
anonymizationservice = AnonymizationService()
config.master_dict = labels_methods()

url_config_path = "../jsonFiles/url_configuration.json"

@app.get('/list_urls', tags=["Configuration"])
def list_urls():
    """
    list all available urls in the url json file
    :return: ApiResponse
    """
    try:
        payload = helpers.parse_json(url_config_path)
    except Exception as e:
        return ApiResponse(success=False, error=e)
    return ApiResponse(data=payload)


@app.post('/set_url', tags=["Configuration"])
def set_url(url: str = Form(...)):
    """
    Add url to the url json file
    :param url: api url in the format: http://ip:port/
    :return: ApiResponse
    """
    try:
        payload = helpers.parse_json(url_config_path)
        response = helpers.check_api_availability(url)
    except Exception as e:
        return ApiResponse(success=False, error=e)
    if response.status_code == 200:
        if url not in payload['urls']:
            payload['urls'].append(url)
            helpers.write_json(payload, url_config_path)
            data = "url added successfully"
        else:
            data = "url already exist"
        return ApiResponse(data=data)
    else:
        return ApiResponse(success=False, error="url trying to add is not reachable")


@app.post('/remove_url', tags=["Configuration"])
def remove_url(url: str = Form(...)):
    """
        Remove url from the url json file
        :param url: api url in the format: http://ip:port/
        :return: ApiResponse
        """
    try:
        payload = helpers.parse_json(url_config_path)
    except Exception as e:
        return ApiResponse(success=False, error=e)
    if url in payload['urls']:
        payload['urls'].remove(url)
        helpers.write_json(payload, url_config_path)
        return ApiResponse(data="url removed successfully")
    else:
        return ApiResponse(success=False, error="url is not present in config file")


@app.post('/remove_all_urls', tags=["Configuration"])
def remove_all_urls():
    """
    Remove all available urls in the url json file
    :return: ApiResponse
    """
    payload = {"urls": []}
    helpers.write_json(payload, url_config_path)
    return ApiResponse(data="all urls removed successfully")


@app.get('/available_methods/', tags=["Configuration"])
def get_available_methods():
    """
    :return: A list that shows the model name, the urls and the model types that support each label in addition to the anonymization techniques that can be applied to each of them
    """
    try:
        config.master_dict = labels_methods()
        return config.master_dict
    except Exception:
        return ApiResponse(success=False, error='unexpected server error')


@app.post('/anonymize/', tags=["Anonymization"])
def anonymize(image: UploadFile = File(...), configuration: UploadFile = File(...)):
    """
    Anonymize the given image
    :param image: Image file
    :param configuration: Json file
    :return: The anonymized image
    """
    try:
        result, errors = anonymizationservice.anonymize(image, configuration)
        if not errors:
            _, im_png = cv2.imencode(".png", result)
            response = StreamingResponse(io.BytesIO(im_png.tobytes()), media_type="image/png")
            return response
        else:
            return ApiResponse(success=False,
                               error="Some data in your configuration file need to be modified. Check the /available_methods/ endpoint",
                               data=errors)
    except ApplicationError as e:
        return ApiResponse(success=False, error=e)
    except Exception:
        return ApiResponse(success=False, error='unexpected server error')


@app.post('/anonymize_video/', tags=["Anonymization"])
def anonymize_video(video: UploadFile = File(...), configuration: UploadFile = File(...)):
    """
    Anonymize the given video and save it to src/main/anonymized_video as original_video_name_TIMESTAMP.mp4
    :param video: Video file
    :param configuration: Json file
    """
    try:
        return anonymizationservice.anonymize_video(video, configuration)
    except ApplicationError as e:
        return ApiResponse(success=False, error=e)
    except Exception:
        return ApiResponse(success=False, error='unexpected server error')


================================================
FILE: src/main/strategy_context.py
================================================
from anonymization.base_anonymization import BaseAnonymization


class StrategyContext:
    def __init__(self):
        pass

    def anonymize(self, detection_type: BaseAnonymization, technique: str, image, response, degree, label_id, mask):
        """
        :param detection_type: The anonymization strategy, either semantic segmentation or object detection
        :param technique: The anonymization method
        :param image: Input image
        :param response: The bounding boxes taken from the output of the inference api
        :param degree: The degree used to specify the opacity of the anonymization
        :param label_id: The id of the detected class
        :param mask: The mask used to apply the anonymization
        :return: The anonymized image
        """
        return getattr(detection_type, technique)(image, response, degree, label_id, mask)
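
The `getattr`-based dispatch can be seen in miniature with stub strategies (toy classes for illustration, not the real anonymization implementations):

```python
# Miniature of the strategy dispatch used by StrategyContext.anonymize:
# the technique name selects a method on the strategy object at runtime.
class ToyDetection:
    def blurring(self, image):
        return "blurred:" + image

    def blackening(self, image):
        return "blackened:" + image

def dispatch(strategy, technique, image):
    # Look up the technique by name on the strategy object and call it
    return getattr(strategy, technique)(image)
```

The same call site thus serves any strategy class that implements the named methods, which is what lets detection and segmentation share one context.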


================================================
FILE: src/main/supported_methods/__init__.py
================================================


================================================
FILE: src/main/supported_methods/common_labels.py
================================================
class CommonLabels:
    def __init__(self):
        self.blackening = None
        self.pixelating = None
        self.blurring = None

    def get_labels(self, label_name):
        methods = []
        for key, value in self.__dict__.items():
            methods.append(key)
        return methods
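
The introspection trick above relies on instance attribute names doubling as the list of supported techniques; a standalone sketch (toy class, mirroring `CommonLabels`):

```python
# Attribute names double as the technique list: every attribute set in
# __init__ is reported by get_labels, in insertion order (Python 3.7+).
class ToyLabels:
    def __init__(self):
        self.blackening = None
        self.pixelating = None
        self.blurring = None

    def get_labels(self):
        return list(self.__dict__.keys())
```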


================================================
FILE: src/main/urlConfigurationSchema
================================================
{
    "type": "object",
    "required": [
        "urls"
    ],
    "properties": {
        "urls": {
            "type": "array",
            "items": {
                "type": "string"
            }
        }
    }
}

================================================
FILE: testing_script/test.py
================================================
import os
import sys
import time
import requests

start_time = time.time()

url = "http://ip:port/anonymize/"
j = 0
list_of_images = os.listdir("large")

for i in list_of_images:
    j = j + 1
    # open the uploads in a context manager so the handles are closed per request
    with open("large/" + i, 'rb') as image_file, open('user_configuration.json', 'rb') as config_file:
        files = {"image": image_file,
                 "configuration": config_file}
        response = requests.post(url, files=files)
    print("Total time: " + str(time.time() - start_time))
    with open("results/anonymized_" + i, 'wb') as f:
        f.write(response.content)
    if response.status_code != 200:
        print(time.time() - start_time)
        sys.exit("error")


================================================
FILE: testing_script/user_configuration.json
================================================
{
  "sensitive_info": [
    {
      "model_name": "yolo",
      "class_name": "person",
      "anonymization_technique": "blackening",
      "inference_type": "detection",
      "anonymization_degree": 1
    }
  ]
}


================================================
FILE: url_for_openvino_compose/url_configuration.json
================================================
{"urls": ["http://openvino_detection_api:80/", "http://openvino_segmentation_api:80/"]}
Download .txt
gitextract_hvd0l8rm/

├── .gitignore
├── LICENSE
├── README.md
├── docker/
│   ├── dockerfile
│   └── requirements.txt
├── docker-compose.yml
├── docker-compose_tf_gluoncv.yml
├── docker_compose_readme.md
├── jsonFiles/
│   ├── url_configuration.json
│   └── user_configuration.json
├── references/
│   └── techniques.md
├── src/
│   └── main/
│       ├── APIClient.py
│       ├── ConfigurationSchema.json
│       ├── __init__.py
│       ├── anonymization/
│       │   ├── __init__.py
│       │   ├── base_anonymization.py
│       │   ├── detection_anonymization.py
│       │   └── segmentation_anonymization.py
│       ├── anonymization_service.py
│       ├── anonymized_video/
│       │   └── .gitignore
│       ├── config.py
│       ├── exceptions.py
│       ├── helpers.py
│       ├── labels.py
│       ├── models.py
│       ├── start.py
│       ├── strategy_context.py
│       ├── supported_methods/
│       │   ├── __init__.py
│       │   └── common_labels.py
│       └── urlConfigurationSchema
├── testing_script/
│   ├── test.py
│   └── user_configuration.json
└── url_for_openvino_compose/
    └── url_configuration.json
Download .txt
SYMBOL INDEX (67 symbols across 12 files)

FILE: src/main/APIClient.py
  class ApiClient (line 11) | class ApiClient:
    method __init__ (line 12) | def __init__(self):
    method get_configuration (line 17) | def get_configuration(self):
    method get_url_configuration (line 24) | def get_url_configuration():
    method get_api_configuration (line 37) | def get_api_configuration(self):
    method get_model_names (line 42) | def get_model_names(url: str):
    method get_models (line 49) | def get_models(self, url: str):
    method get_palette (line 72) | def get_palette(url: str, model_name: str):
    method get_labels (line 79) | def get_labels(url: str, model_name: str):
    method get_model_configuration (line 86) | def get_model_configuration(url: str, model_name: str):
    method get_detection_response (line 93) | def get_detection_response(url: str, model_name: str, im):
    method get_segmentation_response (line 100) | def get_segmentation_response(url: str, model_name: str, im):
  function validate_url_configuration (line 108) | def validate_url_configuration(data):

FILE: src/main/anonymization/base_anonymization.py
  class BaseAnonymization (line 4) | class BaseAnonymization(ABC):
    method blurring (line 9) | def blurring(self, image, response, degree=None, id=None, mask=None):
    method pixelating (line 13) | def pixelating(self, image, response, degree=None, id=None, mask=None):
    method blackening (line 17) | def blackening(self, image, response, degree=None, id=None, mask=None):

FILE: src/main/anonymization/detection_anonymization.py
  function find_boxes (line 5) | def find_boxes(bbox):
  class DetectionAnonymization (line 12) | class DetectionAnonymization(BaseAnonymization):
    method __init__ (line 13) | def __init__(self):
    method blurring (line 16) | def blurring(self, image, response, degree=None, id=None, mask=None):
    method pixelating (line 33) | def pixelating(self, image, response, degree=None, id=None, mask=None):
    method blackening (line 52) | def blackening(self, image, response, degree=None, id=None, mask=None):

FILE: src/main/anonymization/segmentation_anonymization.py
  class SegmentationAnonymization (line 8) | class SegmentationAnonymization(BaseAnonymization):
    method __init__ (line 9) | def __init__(self):
    method blurring (line 12) | def blurring(self, image, response, degree=None, id=None, mask=None):
    method pixelating (line 40) | def pixelating(self, image, response, degree=None, id=None, mask=None):
    method blackening (line 70) | def blackening(self, image, response, degree=None, id=None, mask=None):

FILE: src/main/anonymization_service.py
  class AnonymizationService (line 20) | class AnonymizationService:
    method __init__ (line 22) | def __init__(self):
    method anonymize (line 25) | def anonymize(self, image: UploadFile = File(...), configuration: Uplo...
    method anonymize_video (line 81) | def anonymize_video(self, video: UploadFile = File(...), configuration...

FILE: src/main/exceptions.py
  class ApplicationError (line 4) | class ApplicationError(Exception):
    method __init__ (line 7) | def __init__(self, default_message, additional_message=''):
    method __str__ (line 11) | def __str__(self):
    method get_message (line 14) | def get_message(self):
  class InvalidModelConfiguration (line 19) | class InvalidModelConfiguration(ApplicationError):
    method __init__ (line 22) | def __init__(self, additional_message=''):
  class InvalidInputData (line 26) | class InvalidInputData(ApplicationError):
    method __init__ (line 29) | def __init__(self, additional_message=''):
  class InvalidUrlConfiguration (line 33) | class InvalidUrlConfiguration(ApplicationError):
    method __init__ (line 36) | def __init__(self, additional_message=''):

FILE: src/main/helpers.py
  function get_user_models (line 17) | def get_user_models(configuration_path):
  function parse_inference_response (line 47) | def parse_inference_response(inference_type, user_config, im, i, errors):
  function get_bbs (line 137) | def get_bbs(image, labels_list, user_config):
  function parse_detection_response (line 158) | def parse_detection_response(user_config, im, i, errors):
  function parse_segmentation_response (line 188) | def parse_segmentation_response(user_config, im, i, errors):
  function validate_json_configuration (line 219) | def validate_json_configuration(data):
  function check_api_availability (line 231) | def check_api_availability(url: str):
  function parse_json (line 239) | def parse_json(json_path):
  function write_json (line 248) | def write_json(payload, json_path):

FILE: src/main/labels.py
  function labels_methods (line 9) | def labels_methods():

FILE: src/main/models.py
  class ApiResponse (line 1) | class ApiResponse:
    method __init__ (line 3) | def __init__(self, success=True, data=None, error=None):

FILE: src/main/start.py
  function list_urls (line 28) | def list_urls():
  function set_url (line 41) | def set_url(url: str = Form(...)):
  function remove_url (line 65) | def remove_url(url: str = Form(...)):
  function remove_all_urls (line 84) | def remove_all_urls():
  function get_available_methods (line 95) | def get_available_methods():
  function anonymize (line 107) | def anonymize(image: UploadFile = File(...), configuration: UploadFile =...
  function anonymize_video (line 131) | def anonymize_video(video: UploadFile = File(...), configuration: Upload...

FILE: src/main/strategy_context.py
  class StrategyContext (line 4) | class StrategyContext:
    method __init__ (line 5) | def __init__(self):
    method anonymize (line 8) | def anonymize(self, detection_type: BaseAnonymization, technique: str,...

FILE: src/main/supported_methods/common_labels.py
  class CommonLabels (line 1) | class CommonLabels:
    method __init__ (line 2) | def __init__(self):
    method get_labels (line 7) | def get_labels(self, label_name):
Condensed preview — 33 files, each entry showing its path, character count, and a content snippet (full structured content: ~89K chars).
[
  {
    "path": ".gitignore",
    "chars": 2041,
    "preview": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packagi"
  },
  {
    "path": "LICENSE",
    "chars": 11357,
    "preview": "                                 Apache License\n                           Version 2.0, January 2004\n                   "
  },
  {
    "path": "README.md",
    "chars": 12699,
    "preview": "# BMW-Anonymization-Api\n\nData privacy and individuals’ anonymity are and always have been a major concern for data-drive"
  },
  {
    "path": "docker/dockerfile",
    "chars": 307,
    "preview": "FROM python:3.7\n\nCOPY docker/requirements.txt .\nCOPY src/main /main\n\nRUN apt-get update && apt-get install -y ffmpeg \\\n "
  },
  {
    "path": "docker/requirements.txt",
    "chars": 174,
    "preview": "moviepy==1.0.3\naiofiles==0.8.0\nfastapi==0.70.1\nopencv-python==4.5.4.60\njsonschema==3.2.0\nnumpy==1.21.0\npython-multipart="
  },
  {
    "path": "docker-compose.yml",
    "chars": 1237,
    "preview": "version: '3'\nservices:\n\n  openvino_detection_api:\n    build: \n      context: ../BMW-IntelOpenVINO-Detection-Inference-AP"
  },
  {
    "path": "docker-compose_tf_gluoncv.yml",
    "chars": 1153,
    "preview": "version: \"2.3\"\nservices:\n  detection_api:\n    image: tensorflow_inference_api_cpu:latest\n    build:\n      context: ../BM"
  },
  {
    "path": "docker_compose_readme.md",
    "chars": 5057,
    "preview": "# Deploying the BMW-Anonymization-Api with docker compose\n\nIn this section, docker compose will build and run a network "
  },
  {
    "path": "jsonFiles/url_configuration.json",
    "chars": 20,
    "preview": "{\n  \"urls\": [\n  ]\n}\n"
  },
  {
    "path": "jsonFiles/user_configuration.json",
    "chars": 227,
    "preview": "{\n  \"sensitive_info\": [\n    {\n      \"model_name\": \"sample_model\",\n      \"class_name\": \"person\",\n      \"anonymization_tec"
  },
  {
    "path": "references/techniques.md",
    "chars": 2406,
    "preview": "# Add a new technique to the API\n\nIt is mandatory that the techniques you are adding are actually implemented\nThese are "
  },
  {
    "path": "src/main/APIClient.py",
    "chars": 3700,
    "preview": "import os\nimport time\nimport io\nimport sys\nimport json\nimport requests\nimport jsonschema\nfrom exceptions import InvalidU"
  },
  {
    "path": "src/main/ConfigurationSchema.json",
    "chars": 1078,
    "preview": "{\n    \"type\": \"object\",\n    \"properties\": \n    {\n      \"sensitive_info\": \n      {\n        \"type\": \"array\",\n        \"item"
  },
  {
    "path": "src/main/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/main/anonymization/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/main/anonymization/base_anonymization.py",
    "chars": 489,
    "preview": "from abc import ABC, abstractmethod\n\n\nclass BaseAnonymization(ABC):\n    \"\"\"\n    Base anonymization class for the detecti"
  },
  {
    "path": "src/main/anonymization/detection_anonymization.py",
    "chars": 2770,
    "preview": "from anonymization.base_anonymization import BaseAnonymization\nfrom PIL import ImageFilter, Image\n\n\ndef find_boxes(bbox)"
  },
  {
    "path": "src/main/anonymization/segmentation_anonymization.py",
    "chars": 4746,
    "preview": "from anonymization.base_anonymization import BaseAnonymization\nimport os\nfrom PIL import ImageFilter, Image\nimport numpy"
  },
  {
    "path": "src/main/anonymization_service.py",
    "chars": 8003,
    "preview": "import io\nimport os\nimport cv2\nimport sys\nimport numpy as np\nfrom PIL import Image\nfrom io import BytesIO\nfrom datetime "
  },
  {
    "path": "src/main/anonymized_video/.gitignore",
    "chars": 71,
    "preview": "# Ignore everything in this directory\n*\n# Except this file\n!.gitignore\n"
  },
  {
    "path": "src/main/config.py",
    "chars": 15,
    "preview": "master_dict={}\n"
  },
  {
    "path": "src/main/exceptions.py",
    "chars": 1258,
    "preview": "__metaclass__ = type\n\n\nclass ApplicationError(Exception):\n    \"\"\"Base class for other exceptions\"\"\"\n\n    def __init__(se"
  },
  {
    "path": "src/main/helpers.py",
    "chars": 13008,
    "preview": "import io\nimport cv2\nimport json\nimport config\nimport requests\nimport jsonschema\nimport numpy as np\nfrom PIL import Imag"
  },
  {
    "path": "src/main/labels.py",
    "chars": 4005,
    "preview": "import os\nimport sys\nimport json\nfrom APIClient import ApiClient\n\nsys.path.append(\"supported_methods\")\n\n\ndef labels_meth"
  },
  {
    "path": "src/main/models.py",
    "chars": 451,
    "preview": "class ApiResponse:\n\n    def __init__(self, success=True, data=None, error=None):\n        \"\"\"\n        Defines the respons"
  },
  {
    "path": "src/main/start.py",
    "chars": 5243,
    "preview": "import io\nimport cv2\nfrom models import ApiResponse\nfrom labels import labels_methods\nfrom exceptions import Application"
  },
  {
    "path": "src/main/strategy_context.py",
    "chars": 830,
    "preview": "from anonymization.base_anonymization import BaseAnonymization\n\n\nclass StrategyContext:\n    def __init__(self):\n        "
  },
  {
    "path": "src/main/supported_methods/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/main/supported_methods/common_labels.py",
    "chars": 299,
    "preview": "class CommonLabels:\n    def __init__(self):\n        self.blackening = None\n        self.pixelating = None\n        self.b"
  },
  {
    "path": "src/main/urlConfigurationSchema",
    "chars": 218,
    "preview": "{\n    \"type\": \"object\",\n    \"required\": [\n        \"urls\"\n    ],\n    \"properties\": {\n        \"urls\": {\n            \"type\""
  },
  {
    "path": "testing_script/test.py",
    "chars": 604,
    "preview": "import os\nimport sys\nimport time\nimport requests\n\nstart_time = time.time()\n\nurl = \"http://ip:port/anonymize/\"\nj = 0\nlist"
  },
  {
    "path": "testing_script/user_configuration.json",
    "chars": 216,
    "preview": "{\n  \"sensitive_info\": [\n    {\n      \"model_name\": \"yolo\",\n      \"class_name\": \"person\",\n      \"anonymization_technique\":"
  },
  {
    "path": "url_for_openvino_compose/url_configuration.json",
    "chars": 88,
    "preview": "{\"urls\": [\"http://openvino_detection_api:80/\", \"http://openvino_segmentation_api:80/\"]}\n"
  }
]

About this extraction

This document contains the source of the BMW-InnovationLab/BMW-Anonymization-API GitHub repository, extracted as plain text: 33 files (81.8 KB, roughly 19.7k tokens) plus a symbol index of 67 extracted functions, classes, methods, constants, and types.

Extracted by GitExtract, a GitHub-repo-to-text converter built by Nikandr Surkov.
