[
  {
    "path": ".gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\npip-wheel-metadata/\nshare/python-wheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.nox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n*.py,cover\n.hypothesis/\n.pytest_cache/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\ndb.sqlite3-journal\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# IPython\nprofile_default/\nipython_config.py\n\n# pyenv\n.python-version\n\n# pipenv\n#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.\n#   However, in case of collaboration, if having platform-specific dependencies or dependencies\n#   having no cross-platform support, pipenv may install dependencies that don't work, or not\n#   install all needed dependencies.\n#Pipfile.lock\n\n# PEP 582; used by e.g. github.com/David-OConnor/pyflow\n__pypackages__/\n\n# Celery stuff\ncelerybeat-schedule\ncelerybeat.pid\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n.dmypy.json\ndmypy.json\n\n# Pyre type checker\n.pyre/\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2022 Gerasimov Maxim\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# CLIP-ONNX\nA simple library to speed up CLIP inference by up to 3x (K80 GPU)!\n\n[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Lednik7/CLIP-ONNX/blob/main/examples/readme_example.ipynb)\nOpenAI CLIP\n\n[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Lednik7/CLIP-ONNX/blob/main/examples/RuCLIP_onnx_example.ipynb)\nRuCLIP Example\n\n[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Lednik7/CLIP-ONNX/blob/main/examples/ru_CLIP_tiny_onnx.ipynb)\nRuCLIP tiny Example\n\n## Usage\nInstall the clip-onnx module and its requirements first:\n```python3\n!pip install git+https://github.com/Lednik7/CLIP-ONNX.git\n!pip install git+https://github.com/openai/CLIP.git\n!pip install onnxruntime-gpu\n```\n## Example in 3 steps\n0. Download the CLIP image from the repo\n```python3\n!wget -c -O CLIP.png https://github.com/openai/CLIP/blob/main/CLIP.png?raw=true\n```\n1. Load the standard CLIP model, image and text on the CPU\n```python3\nimport clip\nfrom PIL import Image\nimport numpy as np\n\n# the model must be on cpu for the onnx export\nmodel, preprocess = clip.load(\"ViT-B/32\", device=\"cpu\", jit=False)\n\n# batch first\nimage = preprocess(Image.open(\"CLIP.png\")).unsqueeze(0).cpu() # [1, 3, 224, 224]\nimage_onnx = image.detach().cpu().numpy().astype(np.float32)\n\n# batch first\ntext = clip.tokenize([\"a diagram\", \"a dog\", \"a cat\"]).cpu() # [3, 77]\ntext_onnx = text.detach().cpu().numpy().astype(np.int32) # match the dtype of the dummy input used for export\n```\n2. Create a CLIP-ONNX object to convert the model to onnx\n```python3\nfrom clip_onnx import clip_onnx\n\nvisual_path = \"clip_visual.onnx\"\ntextual_path = \"clip_textual.onnx\"\n\nonnx_model = clip_onnx(model, visual_path=visual_path, textual_path=textual_path)\nonnx_model.convert2onnx(image, text, verbose=True)\n# ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']\nonnx_model.start_sessions(providers=[\"CPUExecutionProvider\"]) # cpu mode\n```\n3. Use it like the standard CLIP API. Batch inference is supported\n```python3\nimage_features = onnx_model.encode_image(image_onnx)\ntext_features = onnx_model.encode_text(text_onnx)\n\nlogits_per_image, logits_per_text = onnx_model(image_onnx, text_onnx)\nprobs = logits_per_image.softmax(dim=-1).detach().cpu().numpy()\n\nprint(\"Label probs:\", probs)  # prints: [[0.9927937  0.00421067 0.00299571]]\n```\n\n**Enjoy the speed**\n\n## Load saved model\nExample for ViT-B/32 from the Model Zoo\n```python3\n!wget https://clip-as-service.s3.us-east-2.amazonaws.com/models/onnx/ViT-B-32/visual.onnx\n!wget https://clip-as-service.s3.us-east-2.amazonaws.com/models/onnx/ViT-B-32/textual.onnx\n```\n```python3\nonnx_model = clip_onnx(None)\nonnx_model.load_onnx(visual_path=\"visual.onnx\",\n                     textual_path=\"textual.onnx\",\n                     logit_scale=100.0000) # model.logit_scale.exp()\nonnx_model.start_sessions(providers=[\"CPUExecutionProvider\"])\n```\n\n## Model Zoo\nONNX weights for the original CLIP models are listed on this [page](https://github.com/jina-ai/clip-as-service/blob/main/server/clip_server/model/clip_onnx.py).\\\nThey are not part of this library but should work correctly.\n\n## If something doesn't work\nSometimes onnx fails to convert a model on the first attempt; in that case it is worth running the conversion again.\n\nIf that does not help, change the export settings.\n\nThe default onnx export options look like this:\n```python3\nDEFAULT_EXPORT = dict(input_names=['input'], output_names=['output'],\n                      export_params=True, verbose=False, opset_version=12,\n                      do_constant_folding=True,\n                      dynamic_axes={'input': {0: 'batch_size'}, 'output': {0: 'batch_size'}})\n```\n\nYou can change them easily:\n```python3\nfrom clip_onnx.utils import DEFAULT_EXPORT\n\nDEFAULT_EXPORT[\"opset_version\"] = 15\n```\n\nAlternative option (change only the visual or the textual part):\n```python3\nfrom clip_onnx import clip_onnx\nfrom clip_onnx.utils import DEFAULT_EXPORT\n\nvisual_path = \"clip_visual.onnx\"\ntextual_path = \"clip_textual.onnx\"\n\ntextual_export_params = DEFAULT_EXPORT.copy()\ntextual_export_params[\"dynamic_axes\"] = {'input': {1: 'batch_size'},\n                                         'output': {0: 'batch_size'}}\ntextual_export_params[\"opset_version\"] = 12\n\nTextual = lambda x: x\n\nonnx_model = clip_onnx(model.cpu(), visual_path=visual_path, textual_path=textual_path)\nonnx_model.convert2onnx(dummy_input_image, dummy_input_text, verbose=True,\n                        textual_wrapper=Textual,\n                        textual_export_params=textual_export_params)\n```\n\n## Best practices\nSee [benchmark.md](https://github.com/Lednik7/CLIP-ONNX/tree/main/benchmark.md)\n## Examples\nSee the [examples folder](https://github.com/Lednik7/CLIP-ONNX/tree/main/examples) for more details. \\\nSome parts of the code were adapted from this [post](https://twitter.com/apeoffire/status/1478493291008172038). Thank you [neverix](https://github.com/neverix) for this notebook.\n"
  },
  {
    "path": "benchmark.md",
    "content": "# CPU benchmarks\nAll times below are in seconds per call at the listed batch size.\n\n#### Run on an Intel(R) Xeon(R) CPU @ 2.30 GHz with 2 cores (Google Colab session)\n\n| ONNX     |   batch |   encode_image |   encode_text |   total |\n|:---------|--------:|---------------:|--------------:|--------:|\n| ViT-B/32 |       2 |          0.234 |         0.162 |   0.396 |\n| ViT-B/32 |       8 |          0.923 |         0.656 |   1.579 |\n| ViT-B/32 |      16 |          2.079 |         1.288 |   3.367 |\n| ViT-B/32 |      32 |          3.937 |         2.658 |   6.595 |\n| ViT-B/32 |      64 |          7.944 |         5.567 |  13.511 |\n\n| TORCH    |   batch |   encode_image |   encode_text |   total |\n|:---------|--------:|---------------:|--------------:|--------:|\n| ViT-B/32 |       2 |          0.343 |         0.243 |   0.586 |\n| ViT-B/32 |       8 |          1.093 |         0.831 |   1.924 |\n| ViT-B/32 |      16 |          1.952 |         1.523 |   3.475 |\n| ViT-B/32 |      32 |          4.079 |         3.015 |   7.094 |\n| ViT-B/32 |      64 |          8.07  |         6.212 |  14.282 |\n\n# GPU benchmarks\n#### Run on NVIDIA Tesla K80 (Google Colab session)\n\n| ONNX     |   batch |   encode_image |   encode_text |   total |\n|:---------|--------:|---------------:|--------------:|--------:|\n| ViT-B/32 |       2 |          0.136 |         0.021 |   0.157 |\n| ViT-B/32 |       8 |          0.054 |         0.04  |   0.094 |\n| ViT-B/32 |      16 |          0.089 |         0.071 |   0.16  |\n| ViT-B/32 |      32 |          0.158 |         0.134 |   0.292 |\n| ViT-B/32 |      64 |          0.325 |         0.258 |   0.583 |\n\n| TORCH    |   batch |   encode_image |   encode_text |   total |\n|:---------|--------:|---------------:|--------------:|--------:|\n| ViT-B/32 |       2 |          0.02  |         0.035 |   0.055 |\n| ViT-B/32 |       8 |          0.081 |         0.098 |   0.179 |\n| ViT-B/32 |      16 |          0.207 |         0.196 |   0.403 |\n| ViT-B/32 |      32 |          0.44  |         0.374 |   0.814 |\n| ViT-B/32 |      64 |          0.919 |         0.719 |   1.638 |\n\n#### Run on NVIDIA Tesla T4 (Google Colab session)\n\n| ONNX     |   batch |   encode_image |   encode_text |   total |\n|:---------|--------:|---------------:|--------------:|--------:|\n| ViT-B/32 |       2 |          0.155 |         0.01  |   0.165 |\n| ViT-B/32 |       8 |          0.032 |         0.014 |   0.046 |\n| ViT-B/32 |      16 |          0.037 |         0.029 |   0.066 |\n| ViT-B/32 |      32 |          0.076 |         0.059 |   0.135 |\n| ViT-B/32 |      64 |          0.169 |         0.117 |   0.286 |\n\n| TORCH    |   batch |   encode_image |   encode_text |   total |\n|:---------|--------:|---------------:|--------------:|--------:|\n| ViT-B/32 |       2 |          0.017 |         0.009 |   0.026 |\n| ViT-B/32 |       8 |          0.008 |         0.008 |   0.016 |\n| ViT-B/32 |      16 |          0.009 |         0.012 |   0.021 |\n| ViT-B/32 |      32 |          0.008 |         0.025 |   0.033 |\n| ViT-B/32 |      64 |          0.009 |         0.049 |   0.058 |\n"
  },
  {
    "path": "clip_onnx/__init__.py",
    "content": "from .clip_converter import clip_converter\nfrom .clip_onnx import clip_onnx\nfrom .utils import Textual, attention\nfrom .benchmark import speed_test\n"
  },
  {
    "path": "clip_onnx/benchmark.py",
    "content": "import time\nimport torch\n\n\ndef speed_test(func, data_gen, n: int = 5, empty_cache: bool = True):\n    \"\"\"Return the average wall-clock time of func over n runs, in seconds.\n\n    data_gen is called before every run to build a fresh input, so input\n    creation is not included in the measured time.\n    \"\"\"\n    if empty_cache:\n        torch.cuda.empty_cache()\n    values = []\n    for _ in range(n):\n        input_data = data_gen()\n        t = time.time()\n        func(input_data)\n        values.append(time.time() - t)\n        if empty_cache:\n            torch.cuda.empty_cache()\n    return sum(values) / n\n"
  },
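  {
    "path": "examples/dev/speed_test_example.py",
    "content": "# A minimal usage sketch for clip_onnx.benchmark.speed_test. This file is an\n# illustrative addition: the file name, batch size and random inputs are\n# assumptions, not part of the published examples. It assumes visual.onnx and\n# textual.onnx were exported or downloaded beforehand (see README.md).\nimport numpy as np\n\nfrom clip_onnx import clip_onnx\nfrom clip_onnx.benchmark import speed_test\n\nonnx_model = clip_onnx(None)\nonnx_model.load_onnx(visual_path=\"visual.onnx\",\n                     textual_path=\"textual.onnx\",\n                     logit_scale=100.0)  # model.logit_scale.exp()\nonnx_model.start_sessions(providers=[\"CPUExecutionProvider\"])\n\nbatch = 8\n\n\ndef image_gen():\n    # a fresh random [batch, 3, 224, 224] float32 input per run (ViT-B/32 shape)\n    return np.random.rand(batch, 3, 224, 224).astype(np.float32)\n\n\n# average seconds over n=5 runs; empty_cache=False because there is no CUDA\n# cache to clear in cpu mode\nprint(\"encode_image:\", speed_test(onnx_model.encode_image, image_gen,\n                                  n=5, empty_cache=False))\n"
  },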
  {
    "path": "clip_onnx/clip_converter.py",
    "content": "import torch\nimport onnx\nfrom torch import nn\nfrom onnxruntime.quantization import quantize_dynamic, QuantType\nfrom .utils import Textual, DEFAULT_EXPORT\n\n\nclass clip_converter(nn.Module):\n    def __init__(self, model, visual_path: str = \"clip_visual.onnx\",\n                 textual_path: str = \"clip_textual.onnx\"):\n        super().__init__()\n        self.model = model\n        self.visual_path = visual_path\n        self.textual_path = textual_path\n        self.visual_flag = False\n        self.textual_flag = False\n        self.logit_scale = self.model.logit_scale.exp()\n\n        self.model.eval()\n        for x in self.model.parameters():\n            x.requires_grad = False\n\n    def quantization(self, mode: str = \"dynamic\"):\n        assert mode in [\"dynamic\"]\n        if mode == \"dynamic\":\n            # quantize weights to uint8 and point the paths at the new files\n            model_quant_visual = f\"{self.visual_path}.quant\"\n            quantize_dynamic(self.visual_path,\n                             model_quant_visual,\n                             weight_type=QuantType.QUInt8)\n            self.visual_path = model_quant_visual\n\n            model_quant_textual = f\"{self.textual_path}.quant\"\n            quantize_dynamic(self.textual_path,\n                             model_quant_textual,\n                             weight_type=QuantType.QUInt8)\n            self.textual_path = model_quant_textual\n\n    def torch_export(self, model, dummy_input, path: str, export_params=DEFAULT_EXPORT):\n        torch.onnx.export(model, dummy_input, path, **export_params)\n\n    def onnx_checker(self, path: str):\n        model = onnx.load(path)\n        onnx.checker.check_model(model)\n        del model\n\n    def convert_visual(self, dummy_input, wrapper=lambda x: x,\n                       export_params=DEFAULT_EXPORT):\n        visual = wrapper(self.model.visual)\n        self.torch_export(visual, dummy_input, self.visual_path,\n                          export_params=export_params)\n\n    def convert_textual(self, dummy_input, wrapper=Textual,\n                        export_params=DEFAULT_EXPORT):\n        textual = wrapper(self.model)\n        self.torch_export(textual, dummy_input, self.textual_path,\n                          export_params=export_params)\n\n    def convert2onnx(self, visual_input=None, textual_input=None, verbose=True,\n                     visual_wrapper=lambda x: x,\n                     textual_wrapper=Textual,\n                     visual_export_params=DEFAULT_EXPORT,\n                     textual_export_params=DEFAULT_EXPORT):\n        has_visual_input = isinstance(visual_input, torch.Tensor)\n        has_textual_input = isinstance(textual_input, torch.Tensor)\n\n        if (not has_visual_input) and (not has_textual_input):\n            raise Exception(\"[CLIP ONNX] Please choose a dummy input\")\n        elif not has_visual_input:\n            print(\"[CLIP ONNX] Convert only textual model\")\n        elif not has_textual_input:\n            print(\"[CLIP ONNX] Convert only visual model\")\n\n        if has_visual_input:\n            self.visual_flag = True\n            if verbose:\n                print(\"[CLIP ONNX] Start convert visual model\")\n            self.convert_visual(visual_input, visual_wrapper, visual_export_params)\n            if verbose:\n                print(\"[CLIP ONNX] Start check visual model\")\n            self.onnx_checker(self.visual_path)\n\n        if has_textual_input:\n            self.textual_flag = True\n            if verbose:\n                print(\"[CLIP ONNX] Start convert textual model\")\n            self.convert_textual(textual_input, textual_wrapper, textual_export_params)\n            if verbose:\n                print(\"[CLIP ONNX] Start check textual model\")\n            self.onnx_checker(self.textual_path)\n\n        if verbose:\n            print(\"[CLIP ONNX] Models converted successfully\")\n"
  },
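  {
    "path": "examples/dev/quantization_example.py",
    "content": "# A minimal sketch of clip_converter.quantization. This file is an\n# illustrative addition; the model name and dummy shapes follow the README\n# example. quantization(\"dynamic\") runs onnxruntime's quantize_dynamic on both\n# exported graphs (QUInt8 weights) and repoints visual_path/textual_path to\n# the new *.quant files, so start_sessions picks them up automatically.\nimport clip\nimport torch\n\nfrom clip_onnx import clip_onnx\n\nmodel, preprocess = clip.load(\"ViT-B/32\", device=\"cpu\", jit=False)\n\nimage = torch.randn(1, 3, 224, 224)  # dummy visual input\ntext = clip.tokenize([\"a diagram\", \"a dog\", \"a cat\"])  # dummy textual input, [3, 77]\n\nonnx_model = clip_onnx(model)\nonnx_model.convert2onnx(image, text, verbose=True)\n\nonnx_model.quantization(mode=\"dynamic\")  # writes clip_visual.onnx.quant etc.\nonnx_model.start_sessions(providers=[\"CPUExecutionProvider\"])\n"
  },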
  {
    "path": "clip_onnx/clip_onnx.py",
    "content": "from .clip_converter import clip_converter\nimport torch\nimport onnxruntime\n\n\nclass clip_onnx(clip_converter):\n    def __init__(self, model=None,\n                 visual_path: str = \"clip_visual.onnx\",\n                 textual_path: str = \"clip_textual.onnx\"):\n        if model is not None:\n            super().__init__(model, visual_path, textual_path)\n        else:\n            # load mode: paths, flags and logit_scale are set later by load_onnx\n            self.visual_flag = False\n            self.textual_flag = False\n            print(\"[CLIP ONNX] Load mode\")\n\n    def load_onnx(self, visual_path=None, textual_path=None, logit_scale=None):\n        if visual_path and textual_path:\n            if logit_scale is None:\n                raise Exception(\"For this mode logit_scale must be specified. Example: model.logit_scale.exp()\")\n            self.logit_scale = logit_scale\n        if visual_path:\n            self.visual_path = visual_path\n            self.visual_flag = True\n        if textual_path:\n            self.textual_path = textual_path\n            self.textual_flag = True\n\n    def start_sessions(self, providers=['TensorrtExecutionProvider',\n                                        'CUDAExecutionProvider',\n                                        'CPUExecutionProvider']):\n        if self.visual_flag:\n            self.visual_session = onnxruntime.InferenceSession(self.visual_path,\n                                                               providers=providers)\n        if self.textual_flag:\n            self.textual_session = onnxruntime.InferenceSession(self.textual_path,\n                                                                providers=providers)\n\n    def visual_run(self, onnx_image):\n        onnx_input_image = {self.visual_session.get_inputs()[0].name: onnx_image}\n        visual_output, = self.visual_session.run(None, onnx_input_image)\n        return visual_output\n\n    def textual_run(self, onnx_text):\n        onnx_input_text = {self.textual_session.get_inputs()[0].name: onnx_text}\n        textual_output, = self.textual_session.run(None, onnx_input_text)\n        return textual_output\n\n    def __call__(self, image, text, device: str = \"cpu\"):\n        assert self.visual_flag and self.textual_flag\n        image_features = torch.from_numpy(self.visual_run(image)).to(device)\n        text_features = torch.from_numpy(self.textual_run(text)).to(device)\n\n        # normalized features\n        image_features = image_features / image_features.norm(dim=-1, keepdim=True)\n        text_features = text_features / text_features.norm(dim=-1, keepdim=True)\n\n        # cosine similarity as logits\n        logits_per_image = self.logit_scale * image_features @ text_features.t()\n        logits_per_text = logits_per_image.t()\n\n        # shape = [global_batch_size, global_batch_size]\n        return logits_per_image, logits_per_text\n\n    def encode_image(self, image):\n        return self.visual_run(image)\n\n    def encode_text(self, text):\n        return self.textual_run(text)\n"
  },
  {
    "path": "clip_onnx/utils.py",
    "content": "import torch.nn.functional as F\nimport torch\nfrom torch import nn\n\n\nclass Textual(nn.Module):\n    def __init__(self, model):\n        super().__init__()\n        self.transformer = model.transformer\n        self.positional_embedding = model.positional_embedding\n        self.ln_final = model.ln_final\n        self.text_projection = model.text_projection\n        self.token_embedding = model.token_embedding\n\n    def forward(self, text):\n        x = self.token_embedding(text)  # [batch_size, n_ctx, d_model]\n\n        x = x + self.positional_embedding\n        x = x.permute(1, 0, 2)  # NLD -> LND\n        x = self.transformer(x)\n        x = x.permute(1, 0, 2)  # LND -> NLD\n        x = self.ln_final(x)\n\n        # x.shape = [batch_size, n_ctx, transformer.width]\n        # take features from the eot embedding (eot_token is the highest number in each sequence)\n        # needs .float() before .argmax() to work\n        x = x[torch.arange(x.shape[0]), text.float().argmax(dim=-1)] @ self.text_projection\n\n        return x\n\n\ndef attention(self, x: torch.Tensor):\n    # onnx doesn't like multi_head_attention_forward, so this is a reimplementation\n    self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None\n    # joint qkv projection, then split into q, k, v\n    q, k, v = (torch.einsum(\"tbh, oh -> tbo\", x, self.attn.in_proj_weight) + self.attn.in_proj_bias).contiguous().chunk(\n        3, dim=-1)\n    tgt_len = q.shape[0]\n    bsz = q.shape[1]\n    num_heads = self.attn.num_heads\n    head_dim = q.shape[2] // num_heads\n    attn_output = scaled_dot_product_attention(\n        q.reshape(tgt_len, bsz * num_heads, head_dim).transpose(0, 1),\n        k.reshape(tgt_len, bsz * num_heads, head_dim).transpose(0, 1),\n        v.reshape(tgt_len, bsz * num_heads, head_dim).transpose(0, 1), self.attn_mask, 0.0\n    )\n    attn_output = attn_output.transpose(0, 1).contiguous().view(q.shape)\n    attn_output = F.linear(attn_output, self.attn.out_proj.weight, self.attn.out_proj.bias)\n    return attn_output\n\n\ndef scaled_dot_product_attention(Q, K, V, attn_mask, dropout_p):\n    if attn_mask is None:\n        attn_weight = torch.softmax(Q @ K.transpose(-2, -1) / Q.size(-1)**0.5, dim=-1)\n    else:\n        attn_weight = torch.softmax(Q @ K.transpose(-2, -1) / Q.size(-1)**0.5 + attn_mask[None, ...], dim=-1)\n    # attn_weight = torch.dropout(attn_weight, dropout_p)  # dropout_p is always 0.0 in CLIP, so dropout is skipped\n    return attn_weight @ V\n\n\nDEFAULT_EXPORT = dict(input_names=['input'], output_names=['output'],\n                      export_params=True, verbose=False, opset_version=12,\n                      do_constant_folding=True,\n                      dynamic_axes={'input': {0: 'batch_size'}, 'output': {0: 'batch_size'}})\n"
  },
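  {
    "path": "examples/dev/attention_patch_example.py",
    "content": "# A minimal sketch of the attention monkey-patch shown in\n# examples/clip_onnx_example.ipynb. This file is an illustrative addition.\n# clip_onnx.utils.attention reimplements CLIP's multi_head_attention_forward\n# with plain matmul ops that export to onnx more reliably; it has the same\n# (self, x) signature as ResidualAttentionBlock.attention, so it can be\n# patched onto the class before converting.\nimport clip\n\nfrom clip_onnx import clip_onnx, attention\n\nclip.model.ResidualAttentionBlock.attention = attention\n\nmodel, preprocess = clip.load(\"ViT-B/32\", device=\"cpu\", jit=False)\nonnx_model = clip_onnx(model)\n# ... continue with onnx_model.convert2onnx(image, text) as in README.md\n"
  },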
  {
    "path": "examples/RuCLIP_onnx_example.ipynb",
    "content": "{\n  \"nbformat\": 4,\n  \"nbformat_minor\": 0,\n  \"metadata\": {\n    \"colab\": {\n      \"name\": \"RuCLIP_onnx_example.ipynb\",\n      \"provenance\": [],\n      \"include_colab_link\": true\n    },\n    \"kernelspec\": {\n      \"name\": \"python3\",\n      \"display_name\": \"Python 3\"\n    },\n    \"language_info\": {\n      \"name\": \"python\"\n    },\n    \"accelerator\": \"GPU\"\n  },\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"view-in-github\",\n        \"colab_type\": \"text\"\n      },\n      \"source\": [\n        \"<a href=\\\"https://colab.research.google.com/github/Lednik7/CLIP-ONNX/blob/main/examples/RuCLIP_onnx_example.ipynb\\\" target=\\\"_parent\\\"><img src=\\\"https://colab.research.google.com/assets/colab-badge.svg\\\" alt=\\\"Open In Colab\\\"/></a>\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"#@title Allowed Resources\\n\",\n        \"import multiprocessing\\n\",\n        \"import torch\\n\",\n        \"from psutil import virtual_memory\\n\",\n        \"\\n\",\n        \"ram_gb = round(virtual_memory().total / 1024**3, 1)\\n\",\n        \"\\n\",\n        \"print('CPU:', multiprocessing.cpu_count())\\n\",\n        \"print('RAM GB:', ram_gb)\\n\",\n        \"print(\\\"PyTorch version:\\\", torch.__version__)\\n\",\n        \"print(\\\"CUDA version:\\\", torch.version.cuda)\\n\",\n        \"print(\\\"cuDNN version:\\\", torch.backends.cudnn.version())\\n\",\n        \"device = torch.device(\\\"cuda:0\\\" if torch.cuda.is_available() else \\\"cpu\\\")\\n\",\n        \"print(\\\"device:\\\", device.type)\\n\",\n        \"\\n\",\n        \"!nvidia-smi\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"cellView\": \"form\",\n        \"id\": \"4gfq46gnYcnU\",\n        \"outputId\": \"41e2054a-e2e4-4bb5-ed39-8bd8bfc639c3\"\n      },\n      \"execution_count\": 1,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"CPU: 2\\n\",\n            \"RAM GB: 12.7\\n\",\n            \"PyTorch version: 1.10.0+cu111\\n\",\n            \"CUDA version: 11.1\\n\",\n            \"cuDNN version: 8005\\n\",\n            \"device: cuda\\n\",\n            \"Wed Jan 19 22:10:10 2022       \\n\",\n            \"+-----------------------------------------------------------------------------+\\n\",\n            \"| NVIDIA-SMI 495.46       Driver Version: 460.32.03    CUDA Version: 11.2     |\\n\",\n            \"|-------------------------------+----------------------+----------------------+\\n\",\n            \"| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |\\n\",\n            \"| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |\\n\",\n            \"|                               |                      |               MIG M. 
|\\n\",\n            \"|===============================+======================+======================|\\n\",\n            \"|   0  Tesla T4            Off  | 00000000:00:04.0 Off |                    0 |\\n\",\n            \"| N/A   41C    P8     9W /  70W |      3MiB / 15109MiB |      0%      Default |\\n\",\n            \"|                               |                      |                  N/A |\\n\",\n            \"+-------------------------------+----------------------+----------------------+\\n\",\n            \"                                                                               \\n\",\n            \"+-----------------------------------------------------------------------------+\\n\",\n            \"| Processes:                                                                  |\\n\",\n            \"|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |\\n\",\n            \"|        ID   ID                                                   Usage      |\\n\",\n            \"|=============================================================================|\\n\",\n            \"|  No running processes found                                                 |\\n\",\n            \"+-----------------------------------------------------------------------------+\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## Restart colab session after installation\\n\",\n        \"Reload the session if something doesn't work\"\n      ],\n      \"metadata\": {\n        \"id\": \"whlsBiJgR8le\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"%%capture\\n\",\n        \"!pip install git+https://github.com/Lednik7/CLIP-ONNX.git\\n\",\n        \"!pip install ruclip==0.0.1rc7\\n\",\n        \"!pip install onnxruntime-gpu\"\n      ],\n      \"metadata\": {\n        \"id\": \"HnbpAkvuR73L\"\n      },\n      \"execution_count\": 2,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"%%capture\\n\",\n        \"!wget -c -O CLIP.png https://github.com/openai/CLIP/blob/main/CLIP.png?raw=true\"\n      ],\n      \"metadata\": {\n        \"id\": \"tqy0zKM4R-7M\"\n      },\n      \"execution_count\": 3,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import onnxruntime\\n\",\n        \"\\n\",\n        \"# priority device (if available)\\n\",\n        \"print(onnxruntime.get_device())\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"x8IN72OnSAIh\",\n        \"outputId\": \"3174cf2c-ace3-4e1f-a550-e16c72302d51\"\n      },\n      \"execution_count\": 4,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"GPU\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## RuCLIP\\n\",\n        \"WARNING: RuCLIP uses the forward signature \\\"model(text, image)\\\" instead of the classic (OpenAI CLIP) \\\"model(image, text)\\\"\"\n      ],\n      \"metadata\": {\n        \"id\": \"8_wSsSheT5mw\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import warnings\\n\",\n        \"\\n\",\n        \"warnings.filterwarnings(\\\"ignore\\\", category=UserWarning)\"\n      ],\n      \"metadata\": {\n        \"id\": 
\"gZTxanR26knr\"\n      },\n      \"execution_count\": 1,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import ruclip\\n\",\n        \"\\n\",\n        \"# onnx cannot export with cuda\\n\",\n        \"model, processor = ruclip.load(\\\"ruclip-vit-base-patch32-384\\\", device=\\\"cpu\\\")\"\n      ],\n      \"metadata\": {\n        \"id\": \"FdTLuqsJUBFY\"\n      },\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"from PIL import Image\\n\",\n        \"import numpy as np\\n\",\n        \"\\n\",\n        \"# simple input\\n\",\n        \"pil_images = [Image.open(\\\"CLIP.png\\\")]\\n\",\n        \"labels = ['диаграмма', 'собака', 'кошка']\\n\",\n        \"dummy_input = processor(text=labels, images=pil_images,\\n\",\n        \"                        return_tensors='pt', padding=True)\\n\",\n        \"\\n\",\n        \"# batch first\\n\",\n        \"image = dummy_input[\\\"pixel_values\\\"] # torch tensor [1, 3, 384, 384]\\n\",\n        \"image_onnx = dummy_input[\\\"pixel_values\\\"].cpu().detach().numpy().astype(np.float32)\\n\",\n        \"\\n\",\n        \"# batch first\\n\",\n        \"text = dummy_input[\\\"input_ids\\\"] # torch tensor [3, 77]\\n\",\n        \"text_onnx = dummy_input[\\\"input_ids\\\"].cpu().detach().numpy().astype(np.int64)\"\n      ],\n      \"metadata\": {\n        \"id\": \"rPwc6A2SSGyl\"\n      },\n      \"execution_count\": 3,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"#RuCLIP output\\n\",\n        \"logits_per_image, logits_per_text = model(text, image)\\n\",\n        \"probs = logits_per_image.softmax(dim=-1).detach().cpu().numpy()\\n\",\n        \"\\n\",\n        \"print(\\\"Label probs:\\\", probs)  # prints: [[0.9885839  0.00894288 0.0024732 ]]\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"pv0mH626SdzO\",\n        \"outputId\": \"d563462f-b2a9-4d49-b491-17e88ffa81f0\"\n      },\n      \"execution_count\": 4,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"Label probs: [[0.9885839  0.00894288 0.0024732 ]]\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## Convert RuCLIP model to ONNX\"\n      ],\n      \"metadata\": {\n        \"id\": \"R_e5OjJeXRiF\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"from clip_onnx import clip_onnx\\n\",\n        \"\\n\",\n        \"visual_path = \\\"clip_visual.onnx\\\"\\n\",\n        \"textual_path = \\\"clip_textual.onnx\\\"\\n\",\n        \"\\n\",\n        \"onnx_model = clip_onnx(model, visual_path=visual_path, textual_path=textual_path)\\n\",\n        \"onnx_model.convert2onnx(image, text, verbose=True)\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"oYM5FDSGSJBW\",\n        \"outputId\": \"c647dc2e-946d-4769-c66e-77edfa98237f\"\n      },\n      \"execution_count\": 5,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"[CLIP ONNX] Start convert visual model\\n\",\n            \"[CLIP ONNX] Start check visual model\\n\",\n            
\"[CLIP ONNX] Start convert textual model\\n\",\n            \"[CLIP ONNX] Start check textual model\\n\",\n            \"[CLIP ONNX] Models converted successfully\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## [ONNX] CPU inference mode\"\n      ],\n      \"metadata\": {\n        \"id\": \"U1Pr-YTtSEhs\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"# ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']\\n\",\n        \"onnx_model.start_sessions(providers=[\\\"CPUExecutionProvider\\\"]) # cpu mode\"\n      ],\n      \"metadata\": {\n        \"id\": \"aY9wRe5kT3wG\"\n      },\n      \"execution_count\": 6,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"image_features = onnx_model.encode_image(image_onnx)\\n\",\n        \"text_features = onnx_model.encode_text(text_onnx)\\n\",\n        \"\\n\",\n        \"logits_per_image, logits_per_text = onnx_model(image_onnx, text_onnx)\\n\",\n        \"probs = logits_per_image.softmax(dim=-1).detach().cpu().numpy()\\n\",\n        \"\\n\",\n        \"print(\\\"Label probs:\\\", probs)  # prints: Label probs: [[0.90831375 0.07174418 0.01994203]]\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"tYVuk72nSLw6\",\n        \"outputId\": \"75bf3803-6ed7-4516-ccd0-42f9cf7f22e0\"\n      },\n      \"execution_count\": 7,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"Label probs: [[0.90831375 0.07174418 0.01994203]]\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"%timeit onnx_model.encode_text(text_onnx) # text representation\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"Bpu4_HFRVeNk\",\n        \"outputId\": \"e8f1681b-40dc-495f-d382-f0348d87c412\"\n      },\n      \"execution_count\": 8,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"1 loop, best of 5: 285 ms per loop\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"%timeit onnx_model.encode_image(image_onnx) # image representation\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"JsOccP2gVmpo\",\n        \"outputId\": \"adb33860-b000-461b-959f-95126e2ac049\"\n      },\n      \"execution_count\": 9,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"1 loop, best of 5: 412 ms per loop\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## [ONNX] GPU inference mode\"\n      ],\n      \"metadata\": {\n        \"id\": \"Zww0E-jIULug\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"onnx_model.start_sessions(providers=[\\\"CUDAExecutionProvider\\\"]) # cuda mode\"\n      ],\n      \"metadata\": {\n        \"id\": \"PBakYeiQUOAm\"\n      },\n      \"execution_count\": 10,\n      \"outputs\": []\n    },\n    {\n      
\"cell_type\": \"code\",\n      \"source\": [\n        \"%timeit onnx_model.encode_text(text_onnx) # text representation\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"EjvRBvCaWJBL\",\n        \"outputId\": \"07426652-1cc5-4713-c355-fb4f1bd138d4\"\n      },\n      \"execution_count\": 11,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"The slowest run took 5.07 times longer than the fastest. This could mean that an intermediate result is being cached.\\n\",\n            \"100 loops, best of 5: 6.89 ms per loop\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"%timeit onnx_model.encode_image(image_onnx) # image representation\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"pmu4mQCsWJ8w\",\n        \"outputId\": \"5cb45026-dfd3-419d-e5d3-f5d0d9681cd0\"\n      },\n      \"execution_count\": 12,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"The slowest run took 699.84 times longer than the fastest. This could mean that an intermediate result is being cached.\\n\",\n            \"1 loop, best of 5: 18.9 ms per loop\\n\"\n          ]\n        }\n      ]\n    }\n  ]\n}"
  },
  {
    "path": "examples/clip_onnx_example.ipynb",
    "content": "{\n  \"nbformat\": 4,\n  \"nbformat_minor\": 0,\n  \"metadata\": {\n    \"colab\": {\n      \"name\": \"clip_onnx_example.ipynb\",\n      \"provenance\": [],\n      \"include_colab_link\": true\n    },\n    \"kernelspec\": {\n      \"name\": \"python3\",\n      \"display_name\": \"Python 3\"\n    },\n    \"language_info\": {\n      \"name\": \"python\"\n    },\n    \"accelerator\": \"GPU\"\n  },\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"view-in-github\",\n        \"colab_type\": \"text\"\n      },\n      \"source\": [\n        \"<a href=\\\"https://colab.research.google.com/github/Lednik7/CLIP-ONNX/blob/main/examples/clip_onnx_example.ipynb\\\" target=\\\"_parent\\\"><img src=\\\"https://colab.research.google.com/assets/colab-badge.svg\\\" alt=\\\"Open In Colab\\\"/></a>\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## Restart colab session after installation\\n\",\n        \"Reload the session if something doesn't work\"\n      ],\n      \"metadata\": {\n        \"id\": \"fxPg_VvZuScV\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": 1,\n      \"metadata\": {\n        \"id\": \"al_QNjyFq6Jj\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"%%capture\\n\",\n        \"!pip install git+https://github.com/Lednik7/CLIP-ONNX.git\\n\",\n        \"!pip install git+https://github.com/openai/CLIP.git\\n\",\n        \"!pip install onnxruntime-gpu\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"%%capture\\n\",\n        \"!wget -c -O CLIP.png https://github.com/openai/CLIP/blob/main/CLIP.png?raw=true\"\n      ],\n      \"metadata\": {\n        \"id\": \"42eeJz9lTdJ6\"\n      },\n      \"execution_count\": 2,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"!nvidia-smi\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"XuauIZIBSEUX\",\n        \"outputId\": \"2c7c2bd9-90dd-4b1a-e98a-79e1f2218644\"\n      },\n      \"execution_count\": 3,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"Thu Jan  6 16:36:44 2022       \\n\",\n            \"+-----------------------------------------------------------------------------+\\n\",\n            \"| NVIDIA-SMI 495.44       Driver Version: 460.32.03    CUDA Version: 11.2     |\\n\",\n            \"|-------------------------------+----------------------+----------------------+\\n\",\n            \"| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |\\n\",\n            \"| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |\\n\",\n            \"|                               |                      |               MIG M. 
|\\n\",\n            \"|===============================+======================+======================|\\n\",\n            \"|   0  Tesla K80           Off  | 00000000:00:04.0 Off |                    0 |\\n\",\n            \"| N/A   35C    P8    26W / 149W |      0MiB / 11441MiB |      0%      Default |\\n\",\n            \"|                               |                      |                  N/A |\\n\",\n            \"+-------------------------------+----------------------+----------------------+\\n\",\n            \"                                                                               \\n\",\n            \"+-----------------------------------------------------------------------------+\\n\",\n            \"| Processes:                                                                  |\\n\",\n            \"|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |\\n\",\n            \"|        ID   ID                                                   Usage      |\\n\",\n            \"|=============================================================================|\\n\",\n            \"|  No running processes found                                                 |\\n\",\n            \"+-----------------------------------------------------------------------------+\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import onnxruntime\\n\",\n        \"print(onnxruntime.get_device())\"\n      ],\n      \"metadata\": {\n        \"id\": \"gqvxpdajRX5_\"\n      },\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## CPU inference mode\"\n      ],\n      \"metadata\": {\n        \"id\": \"010k-ksVTjAu\"\n      }\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"### Torch CLIP\"\n      ],\n      \"metadata\": {\n        \"id\": \"KdTz0IJWVBqE\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import clip\\n\",\n        \"from PIL import Image\\n\",\n        \"import numpy as np\\n\",\n        \"\\n\",\n        \"# onnx cannot work with cuda\\n\",\n        \"model, preprocess = clip.load(\\\"ViT-B/32\\\", device=\\\"cpu\\\", jit=False)\\n\",\n        \"\\n\",\n        \"# batch first\\n\",\n        \"image = preprocess(Image.open(\\\"CLIP.png\\\")).unsqueeze(0).cpu() # [1, 3, 224, 224]\\n\",\n        \"image_onnx = image.detach().cpu().numpy().astype(np.float32)\\n\",\n        \"\\n\",\n        \"# batch first\\n\",\n        \"text = clip.tokenize([\\\"a diagram\\\", \\\"a dog\\\", \\\"a cat\\\"]).cpu() # [3, 77]\\n\",\n        \"text_onnx = text.detach().cpu().numpy().astype(np.int64)\"\n      ],\n      \"metadata\": {\n        \"id\": \"9ROPwKYurOhP\"\n      },\n      \"execution_count\": 1,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"%timeit model(image, text)\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"1CrHQ8cYt8Cx\",\n        \"outputId\": \"4d98f85d-4b02-4ae2-b18f-fb3c7a2d6caf\"\n      },\n      \"execution_count\": 2,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"1 loop, best of 5: 636 ms per loop\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n     
 \"source\": [\n        \"### CLIP-ONNX\"\n      ],\n      \"metadata\": {\n        \"id\": \"Ao2MriaVVG6Y\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"from clip_onnx import clip_onnx, attention\\n\",\n        \"clip.model.ResidualAttentionBlock.attention = attention\\n\",\n        \"\\n\",\n        \"onnx_model = clip_onnx(model)\\n\",\n        \"onnx_model.convert2onnx(image, text, verbose=True)\\n\",\n        \"# ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']\\n\",\n        \"onnx_model.start_sessions(providers=[\\\"CPUExecutionProvider\\\"]) # cpu mode\"\n      ],\n      \"metadata\": {\n        \"id\": \"nSeG9uAZrcph\",\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"outputId\": \"8c394684-d78e-49f6-a60f-872485d5f650\"\n      },\n      \"execution_count\": 2,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"[CLIP ONNX] Start convert visual model\\n\"\n          ]\n        },\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stderr\",\n          \"text\": [\n            \"/usr/local/lib/python3.7/dist-packages/clip_onnx/utils.py:40: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\\n\",\n            \"  head_dim = q.shape[2] // num_heads\\n\",\n            \"/usr/local/lib/python3.7/dist-packages/torch/onnx/symbolic_helper.py:716: UserWarning: allowzero=0 by default. In order to honor zero value in shape use allowzero=1\\n\",\n            \"  warnings.warn(\\\"allowzero=0 by default. In order to honor zero value in shape use allowzero=1\\\")\\n\"\n          ]\n        },\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"[CLIP ONNX] Start check visual model\\n\",\n            \"[CLIP ONNX] Start convert textual model\\n\"\n          ]\n        },\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stderr\",\n          \"text\": [\n            \"/usr/local/lib/python3.7/dist-packages/torch/onnx/symbolic_opset9.py:2819: UserWarning: Exporting aten::index operator of advanced indexing in opset 14 is achieved by combination of multiple ONNX operators, including Reshape, Transpose, Concat, and Gather. 
If indices include negative values, the exported graph will produce incorrect results.\\n\",\n            \"  \\\"If indices include negative values, the exported graph will produce incorrect results.\\\")\\n\"\n          ]\n        },\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"[CLIP ONNX] Start check textual model\\n\",\n            \"[CLIP ONNX] Models converted successfully\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"%timeit onnx_model(image_onnx, text_onnx)\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"B15dr51UrvMh\",\n        \"outputId\": \"7c5fbc64-61f5-4742-d5a1-24d123971515\"\n      },\n      \"execution_count\": 5,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"1 loop, best of 5: 550 ms per loop\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## GPU inference mode\\n\",\n        \"Select a runtime GPU to continue:\\n\",\n        \"\\n\",\n        \"Click Runtime -> Change Runtime Type -> switch \\\"Hardware accelerator\\\" to be GPU. Save it, and you may then connect to a GPU\"\n      ],\n      \"metadata\": {\n        \"id\": \"Ahh_7CkTUb8y\"\n      }\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"### CLIP-ONNX\"\n      ],\n      \"metadata\": {\n        \"id\": \"B6M7yq7qceb5\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"onnx_model.start_sessions(providers=[\\\"CUDAExecutionProvider\\\"]) # GPU mode\"\n      ],\n      \"metadata\": {\n        \"id\": \"6LtPSZhfUd_m\"\n      },\n      \"execution_count\": 6,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"onnx_model.visual_session.get_providers() # optional\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"xE0VGt9sQwrf\",\n        \"outputId\": \"6feb4701-7b7f-437e-dc2f-c95c504dbb89\"\n      },\n      \"execution_count\": 7,\n      \"outputs\": [\n        {\n          \"output_type\": \"execute_result\",\n          \"data\": {\n            \"text/plain\": [\n              \"['CUDAExecutionProvider', 'CPUExecutionProvider']\"\n            ]\n          },\n          \"metadata\": {},\n          \"execution_count\": 7\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"%timeit onnx_model(image_onnx, text_onnx)\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"iPUVzqmgcYas\",\n        \"outputId\": \"3e7c1526-6e38-4982-ca36-eabfc95c2ab9\"\n      },\n      \"execution_count\": 9,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"The slowest run took 79.70 times longer than the fastest. 
This could mean that an intermediate result is being cached.\\n\",\n            \"1 loop, best of 5: 60.8 ms per loop\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"### Torch CLIP\"\n      ],\n      \"metadata\": {\n        \"id\": \"jb58mrkbch2V\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import clip\\n\",\n        \"from PIL import Image\\n\",\n        \"\\n\",\n        \"device = \\\"cuda\\\"\\n\",\n        \"# torch baseline on cuda for comparison\\n\",\n        \"model, preprocess = clip.load(\\\"ViT-B/32\\\", device=device, jit=False)\\n\",\n        \"# batch first\\n\",\n        \"image = preprocess(Image.open(\\\"CLIP.png\\\")).unsqueeze(0).to(device) # [1, 3, 224, 224]\\n\",\n        \"text = clip.tokenize([\\\"a diagram\\\", \\\"a dog\\\", \\\"a cat\\\"]).to(device) # [3, 77]\"\n      ],\n      \"metadata\": {\n        \"id\": \"gidR99GOckyF\"\n      },\n      \"execution_count\": 10,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"%timeit model(image, text)\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"XpBrtjlOcwOC\",\n        \"outputId\": \"56375401-18a0-499b-f29b-c6e2d4d07e42\"\n      },\n      \"execution_count\": 11,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"10 loops, best of 5: 72.2 ms per loop\\n\"\n          ]\n        }\n      ]\n    }\n  ]\n}"
  },
  {
    "path": "examples/dev/clip_onnx_benchmark_cpu.ipynb",
    "content": "{\n  \"nbformat\": 4,\n  \"nbformat_minor\": 0,\n  \"metadata\": {\n    \"colab\": {\n      \"name\": \"clip-onnx-benchmark-cpu.ipynb\",\n      \"provenance\": [],\n      \"authorship_tag\": \"ABX9TyNUvpypuYYk54s1lZecP8Pf\",\n      \"include_colab_link\": true\n    },\n    \"kernelspec\": {\n      \"name\": \"python3\",\n      \"display_name\": \"Python 3\"\n    },\n    \"language_info\": {\n      \"name\": \"python\"\n    },\n    \"accelerator\": \"GPU\"\n  },\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"view-in-github\",\n        \"colab_type\": \"text\"\n      },\n      \"source\": [\n        \"<a href=\\\"https://colab.research.google.com/github/Lednik7/CLIP-ONNX/blob/dev/examples/dev/clip_onnx_benchmark_cpu.ipynb\\\" target=\\\"_parent\\\"><img src=\\\"https://colab.research.google.com/assets/colab-badge.svg\\\" alt=\\\"Open In Colab\\\"/></a>\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## Restart colab session after installation\\n\",\n        \"Reload the session if something doesn't work\"\n      ],\n      \"metadata\": {\n        \"id\": \"fxPg_VvZuScV\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": 1,\n      \"metadata\": {\n        \"id\": \"al_QNjyFq6Jj\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"%%capture\\n\",\n        \"!pip install git+https://github.com/Lednik7/CLIP-ONNX.git@dev\\n\",\n        \"!pip install git+https://github.com/openai/CLIP.git\\n\",\n        \"!pip install onnxruntime-gpu\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"%%capture\\n\",\n        \"!wget -c -O CLIP.png https://github.com/openai/CLIP/blob/main/CLIP.png?raw=true\"\n      ],\n      \"metadata\": {\n        \"id\": \"42eeJz9lTdJ6\"\n      },\n      \"execution_count\": 1,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"!nvidia-smi\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"XuauIZIBSEUX\",\n        \"outputId\": \"7e3fa9a5-2970-4bc1-81e5-9ec997a267a1\"\n      },\n      \"execution_count\": 2,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"Tue May  3 06:56:57 2022       \\n\",\n            \"+-----------------------------------------------------------------------------+\\n\",\n            \"| NVIDIA-SMI 460.32.03    Driver Version: 460.32.03    CUDA Version: 11.2     |\\n\",\n            \"|-------------------------------+----------------------+----------------------+\\n\",\n            \"| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |\\n\",\n            \"| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |\\n\",\n            \"|                               |                      |               MIG M. 
|\\n\",\n            \"|===============================+======================+======================|\\n\",\n            \"|   0  Tesla T4            Off  | 00000000:00:04.0 Off |                    0 |\\n\",\n            \"| N/A   47C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |\\n\",\n            \"|                               |                      |                  N/A |\\n\",\n            \"+-------------------------------+----------------------+----------------------+\\n\",\n            \"                                                                               \\n\",\n            \"+-----------------------------------------------------------------------------+\\n\",\n            \"| Processes:                                                                  |\\n\",\n            \"|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |\\n\",\n            \"|        ID   ID                                                   Usage      |\\n\",\n            \"|=============================================================================|\\n\",\n            \"|  No running processes found                                                 |\\n\",\n            \"+-----------------------------------------------------------------------------+\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import onnxruntime\\n\",\n        \"print(onnxruntime.get_device())\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"gqvxpdajRX5_\",\n        \"outputId\": \"4ad23904-186a-4e19-af9a-66538a70a3c8\"\n      },\n      \"execution_count\": 3,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"GPU\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## GPU inference mode\\n\",\n        \"Select a runtime GPU to continue:\\n\",\n        \"\\n\",\n        \"Click Runtime -> Change Runtime Type -> switch \\\"Harware accelerator\\\" to be GPU. 
Save it, and you may be connected to a GPU\"\n      ],\n      \"metadata\": {\n        \"id\": \"010k-ksVTjAu\"\n      }\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"### Torch CLIP\"\n      ],\n      \"metadata\": {\n        \"id\": \"KdTz0IJWVBqE\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import clip\\n\",\n        \"from PIL import Image\\n\",\n        \"import numpy as np\\n\",\n        \"\\n\",\n        \"# onnx cannot work with cuda\\n\",\n        \"model, preprocess = clip.load(\\\"ViT-B/32\\\", device=\\\"cpu\\\", jit=False)\\n\",\n        \"\\n\",\n        \"# batch first\\n\",\n        \"image = preprocess(Image.open(\\\"CLIP.png\\\")).unsqueeze(0)  # [1, 3, 224, 224]\\n\",\n        \"image_onnx = image.detach().cpu().numpy().astype(np.float32)\\n\",\n        \"\\n\",\n        \"# batch first\\n\",\n        \"text = clip.tokenize([\\\"a diagram\\\", \\\"a dog\\\", \\\"a cat\\\"]) # [3, 77]\\n\",\n        \"text_onnx = text.detach().cpu().numpy().astype(np.int32)\"\n      ],\n      \"metadata\": {\n        \"id\": \"9ROPwKYurOhP\"\n      },\n      \"execution_count\": 4,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"### CLIP-ONNX\"\n      ],\n      \"metadata\": {\n        \"id\": \"Ao2MriaVVG6Y\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"from clip_onnx import clip_onnx\\n\",\n        \"\\n\",\n        \"onnx_model = clip_onnx(model)\\n\",\n        \"onnx_model.convert2onnx(image, text, verbose=True)\\n\",\n        \"# ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']\\n\",\n        \"onnx_model.start_sessions(providers=[\\\"CPUExecutionProvider\\\"]) # cpu mode\"\n      ],\n      \"metadata\": {\n        \"id\": \"nSeG9uAZrcph\",\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"outputId\": \"32e7fb6e-191a-4c3a-a8be-42ddf41ee62d\"\n      },\n      \"execution_count\": 6,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"[CLIP ONNX] Start convert visual model\\n\",\n            \"[CLIP ONNX] Start check visual model\\n\",\n            \"[CLIP ONNX] Start convert textual model\\n\"\n          ]\n        },\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stderr\",\n          \"text\": [\n            \"/usr/local/lib/python3.7/dist-packages/torch/onnx/symbolic_opset9.py:2909: UserWarning: Exporting aten::index operator of advanced indexing in opset 12 is achieved by combination of multiple ONNX operators, including Reshape, Transpose, Concat, and Gather. 
If indices include negative values, the exported graph will produce incorrect results.\\n\",\n            \"  \\\"If indices include negative values, the exported graph will produce incorrect results.\\\")\\n\"\n          ]\n        },\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"[CLIP ONNX] Start check textual model\\n\",\n            \"[CLIP ONNX] Models converts successfully\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"onnx_model = clip_onnx(model)\\n\",\n        \"onnx_model.load_onnx(\\\"/content/clip_visual.onnx\\\",\\n\",\n        \"                     \\\"/content/clip_textual.onnx\\\",\\n\",\n        \"                     model.logit_scale.exp())\\n\",\n        \"onnx_model.start_sessions(providers=[\\\"CPUExecutionProvider\\\"]) # cpu mode\"\n      ],\n      \"metadata\": {\n        \"id\": \"PsDS7ty79zZf\"\n      },\n      \"execution_count\": 7,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"onnx_model.visual_session.get_providers()\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"aZsGJNrbNCYe\",\n        \"outputId\": \"27eec69c-6535-46e1-d98a-15836459149e\"\n      },\n      \"execution_count\": 8,\n      \"outputs\": [\n        {\n          \"output_type\": \"execute_result\",\n          \"data\": {\n            \"text/plain\": [\n              \"['CPUExecutionProvider']\"\n            ]\n          },\n          \"metadata\": {},\n          \"execution_count\": 8\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## Benchmark\"\n      ],\n      \"metadata\": {\n        \"id\": \"J5IcOG_6jAFz\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"model, preprocess = clip.load(\\\"ViT-B/32\\\", device=\\\"cpu\\\", jit=False)\"\n      ],\n      \"metadata\": {\n        \"id\": \"SJ_5_x7vLepK\"\n      },\n      \"execution_count\": 9,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"model.eval()\\n\",\n        \"for x in model.parameters():\\n\",\n        \"    x.requires_grad = False\"\n      ],\n      \"metadata\": {\n        \"id\": \"OnOzZ3LMuubW\"\n      },\n      \"execution_count\": 10,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import numpy, random, torch\"\n      ],\n      \"metadata\": {\n        \"id\": \"wDwqRRrTGKUS\"\n      },\n      \"execution_count\": 11,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"def set_seed():\\n\",\n        \"    torch.manual_seed(12)\\n\",\n        \"    torch.cuda.manual_seed(12)\\n\",\n        \"    np.random.seed(12)\\n\",\n        \"    random.seed(12)\\n\",\n        \"\\n\",\n        \"    torch.backends.cudnn.deterministic=True\"\n      ],\n      \"metadata\": {\n        \"id\": \"9H17n_6gGJgT\"\n      },\n      \"execution_count\": 12,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import torch\\n\",\n        \"import time\\n\",\n        \"\\n\",\n        \"n = 5\\n\",\n        \"clip_results = {\\\"encode_image\\\": [],\\n\",\n        \"                \\\"encode_text\\\": []}\\n\",\n        \"onnx_results = {\\\"encode_image\\\": 
[],\\n\",\n        \"                \\\"encode_text\\\": []}\\n\",\n        \"for batch in [2, 8, 16, 32, 64]:\\n\",\n        \"    set_seed()\\n\",\n        \"    t_mean = []\\n\",\n        \"    for _ in range(n):\\n\",\n        \"        image_input = torch.randint(1, 255, (batch, 3, 224, 224))\\n\",\n        \"        image_input_onnx = image_input.detach().cpu().numpy().astype(np.float32)\\n\",\n        \"        t = time.time()\\n\",\n        \"        onnx_model.encode_image(image_input_onnx)\\n\",\n        \"        t_mean.append(time.time() - t)\\n\",\n        \"    print(\\\"onnx\\\", batch, \\\"encode_image\\\", round(sum(t_mean) / n, 3))\\n\",\n        \"    torch.cuda.empty_cache()\\n\",\n        \"    onnx_results[\\\"encode_image\\\"].append([batch, round(sum(t_mean) / n, 3)])\\n\",\n        \"\\n\",\n        \"    set_seed()\\n\",\n        \"    with torch.inference_mode():\\n\",\n        \"        t_mean = []\\n\",\n        \"        for _ in range(n):\\n\",\n        \"            image_input = torch.randint(1, 255, (batch, 3, 224, 224))\\n\",\n        \"            t = time.time()\\n\",\n        \"            model.encode_image(image_input)\\n\",\n        \"            t_mean.append(time.time() - t)\\n\",\n        \"    print(\\\"torch\\\", batch, \\\"encode_image\\\", round(sum(t_mean) / n, 3))\\n\",\n        \"    torch.cuda.empty_cache()\\n\",\n        \"    clip_results[\\\"encode_image\\\"].append([batch, round(sum(t_mean) / n, 3)])\\n\",\n        \"\\n\",\n        \"    set_seed()\\n\",\n        \"    t_mean = []\\n\",\n        \"    for _ in range(n):\\n\",\n        \"        text_input = torch.randint(320, 49407, (batch, 77))\\n\",\n        \"        text_input_onnx = text_input.detach().cpu().numpy().astype(np.int32)\\n\",\n        \"        t = time.time()\\n\",\n        \"        onnx_model.encode_text(text_input_onnx)\\n\",\n        \"        t_mean.append(time.time() - t)\\n\",\n        \"    print(\\\"onnx\\\", batch, \\\"encode_text\\\", round(sum(t_mean) / n, 3))\\n\",\n        \"    torch.cuda.empty_cache()\\n\",\n        \"    onnx_results[\\\"encode_text\\\"].append([batch, round(sum(t_mean) / n, 3)])\\n\",\n        \"\\n\",\n        \"    set_seed()\\n\",\n        \"    with torch.inference_mode():\\n\",\n        \"        t_mean = []\\n\",\n        \"        for _ in range(n):\\n\",\n        \"            text_input = torch.randint(320, 49407, (batch, 77))\\n\",\n        \"            t = time.time()\\n\",\n        \"            model.encode_text(text_input)\\n\",\n        \"            t_mean.append(time.time() - t)\\n\",\n        \"    print(\\\"torch\\\", batch, \\\"encode_text\\\", round(sum(t_mean) / n, 3))\\n\",\n        \"    torch.cuda.empty_cache()\\n\",\n        \"    clip_results[\\\"encode_text\\\"].append([batch, round(sum(t_mean) / n, 3)])\\n\",\n        \"\\n\",\n        \"    print(\\\"-\\\" * 78)\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"4lFL6tzWjiWL\",\n        \"outputId\": \"45819718-619e-429c-9aa4-7e28b068b9a3\"\n      },\n      \"execution_count\": 13,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"onnx 2 encode_image 0.234\\n\",\n            \"torch 2 encode_image 0.343\\n\",\n            \"onnx 2 encode_text 0.162\\n\",\n            \"torch 2 encode_text 0.243\\n\",\n            
\"------------------------------------------------------------------------------\\n\",\n            \"onnx 8 encode_image 0.923\\n\",\n            \"torch 8 encode_image 1.093\\n\",\n            \"onnx 8 encode_text 0.656\\n\",\n            \"torch 8 encode_text 0.831\\n\",\n            \"------------------------------------------------------------------------------\\n\",\n            \"onnx 16 encode_image 2.079\\n\",\n            \"torch 16 encode_image 1.952\\n\",\n            \"onnx 16 encode_text 1.288\\n\",\n            \"torch 16 encode_text 1.523\\n\",\n            \"------------------------------------------------------------------------------\\n\",\n            \"onnx 32 encode_image 3.937\\n\",\n            \"torch 32 encode_image 4.079\\n\",\n            \"onnx 32 encode_text 2.658\\n\",\n            \"torch 32 encode_text 3.015\\n\",\n            \"------------------------------------------------------------------------------\\n\",\n            \"onnx 64 encode_image 7.944\\n\",\n            \"torch 64 encode_image 8.07\\n\",\n            \"onnx 64 encode_text 5.567\\n\",\n            \"torch 64 encode_text 6.212\\n\",\n            \"------------------------------------------------------------------------------\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import pandas as pd\"\n      ],\n      \"metadata\": {\n        \"id\": \"P2YhbE9v_4ci\"\n      },\n      \"execution_count\": 14,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"pd.DataFrame({\\\"backend\\\": [\\\"onnx\\\", \\\"torch\\\"] * 5,\\n\",\n        \"              \\\"batch\\\": [2, 2, 8, 8, 16, 16, 32, 32, 64, 64],\\n\",\n        \"              \\\"encode_image\\\": [j[1] for i in zip(onnx_results[\\\"encode_image\\\"],\\n\",\n        \"                                              clip_results[\\\"encode_image\\\"]) for j in i],\\n\",\n        \"              \\\"encode_text\\\": [j[1] for i in zip(onnx_results[\\\"encode_text\\\"],\\n\",\n        \"                                              clip_results[\\\"encode_text\\\"]) for j in i]})\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\",\n          \"height\": 362\n        },\n        \"id\": \"WfZfDk4PAlqm\",\n        \"outputId\": \"38710ad6-09ae-4c48-fc20-1cdabf4c2a50\"\n      },\n      \"execution_count\": 15,\n      \"outputs\": [\n        {\n          \"output_type\": \"execute_result\",\n          \"data\": {\n            \"text/plain\": [\n              \"  backend  batch  encode_image  encode_text\\n\",\n              \"0    onnx      2         0.234        0.162\\n\",\n              \"1   torch      2         0.343        0.243\\n\",\n              \"2    onnx      8         0.923        0.656\\n\",\n              \"3   torch      8         1.093        0.831\\n\",\n              \"4    onnx     16         2.079        1.288\\n\",\n              \"5   torch     16         1.952        1.523\\n\",\n              \"6    onnx     32         3.937        2.658\\n\",\n              \"7   torch     32         4.079        3.015\\n\",\n              \"8    onnx     64         7.944        5.567\\n\",\n              \"9   torch     64         8.070        6.212\"\n            ],\n            \"text/html\": [\n              \"\\n\",\n              \"  <div id=\\\"df-e4f91703-fb85-4559-be94-d5ff4e38a360\\\">\\n\",\n              \"    <div 
class=\\\"colab-df-container\\\">\\n\",\n              \"      <div>\\n\",\n              \"<style scoped>\\n\",\n              \"    .dataframe tbody tr th:only-of-type {\\n\",\n              \"        vertical-align: middle;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .dataframe tbody tr th {\\n\",\n              \"        vertical-align: top;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .dataframe thead th {\\n\",\n              \"        text-align: right;\\n\",\n              \"    }\\n\",\n              \"</style>\\n\",\n              \"<table border=\\\"1\\\" class=\\\"dataframe\\\">\\n\",\n              \"  <thead>\\n\",\n              \"    <tr style=\\\"text-align: right;\\\">\\n\",\n              \"      <th></th>\\n\",\n              \"      <th>backend</th>\\n\",\n              \"      <th>batch</th>\\n\",\n              \"      <th>encode_image</th>\\n\",\n              \"      <th>encode_text</th>\\n\",\n              \"    </tr>\\n\",\n              \"  </thead>\\n\",\n              \"  <tbody>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>0</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>2</td>\\n\",\n              \"      <td>0.234</td>\\n\",\n              \"      <td>0.162</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>1</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>2</td>\\n\",\n              \"      <td>0.343</td>\\n\",\n              \"      <td>0.243</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>2</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>8</td>\\n\",\n              \"      <td>0.923</td>\\n\",\n              \"      <td>0.656</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>3</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>8</td>\\n\",\n              \"      <td>1.093</td>\\n\",\n              \"      <td>0.831</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>4</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>16</td>\\n\",\n              \"      <td>2.079</td>\\n\",\n              \"      <td>1.288</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>5</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>16</td>\\n\",\n              \"      <td>1.952</td>\\n\",\n              \"      <td>1.523</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>6</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>32</td>\\n\",\n              \"      <td>3.937</td>\\n\",\n              \"      <td>2.658</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>7</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>32</td>\\n\",\n              \"      <td>4.079</td>\\n\",\n              \"      <td>3.015</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>8</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>64</td>\\n\",\n              \"      <td>7.944</td>\\n\",\n              \"      
<td>5.567</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>9</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>64</td>\\n\",\n              \"      <td>8.070</td>\\n\",\n              \"      <td>6.212</td>\\n\",\n              \"    </tr>\\n\",\n              \"  </tbody>\\n\",\n              \"</table>\\n\",\n              \"</div>\\n\",\n              \"      <button class=\\\"colab-df-convert\\\" onclick=\\\"convertToInteractive('df-e4f91703-fb85-4559-be94-d5ff4e38a360')\\\"\\n\",\n              \"              title=\\\"Convert this dataframe to an interactive table.\\\"\\n\",\n              \"              style=\\\"display:none;\\\">\\n\",\n              \"        \\n\",\n              \"  <svg xmlns=\\\"http://www.w3.org/2000/svg\\\" height=\\\"24px\\\"viewBox=\\\"0 0 24 24\\\"\\n\",\n              \"       width=\\\"24px\\\">\\n\",\n              \"    <path d=\\\"M0 0h24v24H0V0z\\\" fill=\\\"none\\\"/>\\n\",\n              \"    <path d=\\\"M18.56 5.44l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 2.06-2.06.94zm-11 1L8.5 8.5l.94-2.06 2.06-.94-2.06-.94L8.5 2.5l-.94 2.06-2.06.94zm10 10l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 2.06-2.06.94z\\\"/><path d=\\\"M17.41 7.96l-1.37-1.37c-.4-.4-.92-.59-1.43-.59-.52 0-1.04.2-1.43.59L10.3 9.45l-7.72 7.72c-.78.78-.78 2.05 0 2.83L4 21.41c.39.39.9.59 1.41.59.51 0 1.02-.2 1.41-.59l7.78-7.78 2.81-2.81c.8-.78.8-2.07 0-2.86zM5.41 20L4 18.59l7.72-7.72 1.47 1.35L5.41 20z\\\"/>\\n\",\n              \"  </svg>\\n\",\n              \"      </button>\\n\",\n              \"      \\n\",\n              \"  <style>\\n\",\n              \"    .colab-df-container {\\n\",\n              \"      display:flex;\\n\",\n              \"      flex-wrap:wrap;\\n\",\n              \"      gap: 12px;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .colab-df-convert {\\n\",\n              \"      background-color: #E8F0FE;\\n\",\n              \"      border: none;\\n\",\n              \"      border-radius: 50%;\\n\",\n              \"      cursor: pointer;\\n\",\n              \"      display: none;\\n\",\n              \"      fill: #1967D2;\\n\",\n              \"      height: 32px;\\n\",\n              \"      padding: 0 0 0 0;\\n\",\n              \"      width: 32px;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .colab-df-convert:hover {\\n\",\n              \"      background-color: #E2EBFA;\\n\",\n              \"      box-shadow: 0px 1px 2px rgba(60, 64, 67, 0.3), 0px 1px 3px 1px rgba(60, 64, 67, 0.15);\\n\",\n              \"      fill: #174EA6;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    [theme=dark] .colab-df-convert {\\n\",\n              \"      background-color: #3B4455;\\n\",\n              \"      fill: #D2E3FC;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    [theme=dark] .colab-df-convert:hover {\\n\",\n              \"      background-color: #434B5C;\\n\",\n              \"      box-shadow: 0px 1px 3px 1px rgba(0, 0, 0, 0.15);\\n\",\n              \"      filter: drop-shadow(0px 1px 2px rgba(0, 0, 0, 0.3));\\n\",\n              \"      fill: #FFFFFF;\\n\",\n              \"    }\\n\",\n              \"  </style>\\n\",\n              \"\\n\",\n              \"      <script>\\n\",\n              \"        const buttonEl =\\n\",\n              \"          document.querySelector('#df-e4f91703-fb85-4559-be94-d5ff4e38a360 
button.colab-df-convert');\\n\",\n              \"        buttonEl.style.display =\\n\",\n              \"          google.colab.kernel.accessAllowed ? 'block' : 'none';\\n\",\n              \"\\n\",\n              \"        async function convertToInteractive(key) {\\n\",\n              \"          const element = document.querySelector('#df-e4f91703-fb85-4559-be94-d5ff4e38a360');\\n\",\n              \"          const dataTable =\\n\",\n              \"            await google.colab.kernel.invokeFunction('convertToInteractive',\\n\",\n              \"                                                     [key], {});\\n\",\n              \"          if (!dataTable) return;\\n\",\n              \"\\n\",\n              \"          const docLinkHtml = 'Like what you see? Visit the ' +\\n\",\n              \"            '<a target=\\\"_blank\\\" href=https://colab.research.google.com/notebooks/data_table.ipynb>data table notebook</a>'\\n\",\n              \"            + ' to learn more about interactive tables.';\\n\",\n              \"          element.innerHTML = '';\\n\",\n              \"          dataTable['output_type'] = 'display_data';\\n\",\n              \"          await google.colab.output.renderOutput(dataTable, element);\\n\",\n              \"          const docLink = document.createElement('div');\\n\",\n              \"          docLink.innerHTML = docLinkHtml;\\n\",\n              \"          element.appendChild(docLink);\\n\",\n              \"        }\\n\",\n              \"      </script>\\n\",\n              \"    </div>\\n\",\n              \"  </div>\\n\",\n              \"  \"\n            ]\n          },\n          \"metadata\": {},\n          \"execution_count\": 15\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"onnx_df = pd.DataFrame({\\\"ONNX\\\": [\\\"ViT-B/32\\\"] * 5,\\n\",\n        \"              \\\"batch\\\": [2, 8, 16, 32, 64],\\n\",\n        \"              \\\"encode_image\\\": [i[1] for i in onnx_results[\\\"encode_image\\\"]],\\n\",\n        \"              \\\"encode_text\\\": [i[1] for i in onnx_results[\\\"encode_text\\\"]]})\\n\",\n        \"onnx_df[\\\"total\\\"] = onnx_df[\\\"encode_image\\\"] + onnx_df[\\\"encode_text\\\"]\"\n      ],\n      \"metadata\": {\n        \"id\": \"Xpw9lV7yBbA8\"\n      },\n      \"execution_count\": 16,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"onnx_df\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\",\n          \"height\": 206\n        },\n        \"id\": \"LItAyQkeDhnQ\",\n        \"outputId\": \"37517a71-baf3-494c-8a46-9f05cbfb7d32\"\n      },\n      \"execution_count\": 17,\n      \"outputs\": [\n        {\n          \"output_type\": \"execute_result\",\n          \"data\": {\n            \"text/plain\": [\n              \"       ONNX  batch  encode_image  encode_text   total\\n\",\n              \"0  ViT-B/32      2         0.234        0.162   0.396\\n\",\n              \"1  ViT-B/32      8         0.923        0.656   1.579\\n\",\n              \"2  ViT-B/32     16         2.079        1.288   3.367\\n\",\n              \"3  ViT-B/32     32         3.937        2.658   6.595\\n\",\n              \"4  ViT-B/32     64         7.944        5.567  13.511\"\n            ],\n            \"text/html\": [\n              \"\\n\",\n              \"  <div id=\\\"df-93a4fa7a-32c4-4c2d-803e-5e150f825186\\\">\\n\",\n              \"    <div 
class=\\\"colab-df-container\\\">\\n\",\n              \"      <div>\\n\",\n              \"<style scoped>\\n\",\n              \"    .dataframe tbody tr th:only-of-type {\\n\",\n              \"        vertical-align: middle;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .dataframe tbody tr th {\\n\",\n              \"        vertical-align: top;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .dataframe thead th {\\n\",\n              \"        text-align: right;\\n\",\n              \"    }\\n\",\n              \"</style>\\n\",\n              \"<table border=\\\"1\\\" class=\\\"dataframe\\\">\\n\",\n              \"  <thead>\\n\",\n              \"    <tr style=\\\"text-align: right;\\\">\\n\",\n              \"      <th></th>\\n\",\n              \"      <th>ONNX</th>\\n\",\n              \"      <th>batch</th>\\n\",\n              \"      <th>encode_image</th>\\n\",\n              \"      <th>encode_text</th>\\n\",\n              \"      <th>total</th>\\n\",\n              \"    </tr>\\n\",\n              \"  </thead>\\n\",\n              \"  <tbody>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>0</th>\\n\",\n              \"      <td>ViT-B/32</td>\\n\",\n              \"      <td>2</td>\\n\",\n              \"      <td>0.234</td>\\n\",\n              \"      <td>0.162</td>\\n\",\n              \"      <td>0.396</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>1</th>\\n\",\n              \"      <td>ViT-B/32</td>\\n\",\n              \"      <td>8</td>\\n\",\n              \"      <td>0.923</td>\\n\",\n              \"      <td>0.656</td>\\n\",\n              \"      <td>1.579</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>2</th>\\n\",\n              \"      <td>ViT-B/32</td>\\n\",\n              \"      <td>16</td>\\n\",\n              \"      <td>2.079</td>\\n\",\n              \"      <td>1.288</td>\\n\",\n              \"      <td>3.367</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>3</th>\\n\",\n              \"      <td>ViT-B/32</td>\\n\",\n              \"      <td>32</td>\\n\",\n              \"      <td>3.937</td>\\n\",\n              \"      <td>2.658</td>\\n\",\n              \"      <td>6.595</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>4</th>\\n\",\n              \"      <td>ViT-B/32</td>\\n\",\n              \"      <td>64</td>\\n\",\n              \"      <td>7.944</td>\\n\",\n              \"      <td>5.567</td>\\n\",\n              \"      <td>13.511</td>\\n\",\n              \"    </tr>\\n\",\n              \"  </tbody>\\n\",\n              \"</table>\\n\",\n              \"</div>\\n\",\n              \"      <button class=\\\"colab-df-convert\\\" onclick=\\\"convertToInteractive('df-93a4fa7a-32c4-4c2d-803e-5e150f825186')\\\"\\n\",\n              \"              title=\\\"Convert this dataframe to an interactive table.\\\"\\n\",\n              \"              style=\\\"display:none;\\\">\\n\",\n              \"        \\n\",\n              \"  <svg xmlns=\\\"http://www.w3.org/2000/svg\\\" height=\\\"24px\\\"viewBox=\\\"0 0 24 24\\\"\\n\",\n              \"       width=\\\"24px\\\">\\n\",\n              \"    <path d=\\\"M0 0h24v24H0V0z\\\" fill=\\\"none\\\"/>\\n\",\n              \"    <path d=\\\"M18.56 5.44l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 
2.06-2.06.94zm-11 1L8.5 8.5l.94-2.06 2.06-.94-2.06-.94L8.5 2.5l-.94 2.06-2.06.94zm10 10l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 2.06-2.06.94z\\\"/><path d=\\\"M17.41 7.96l-1.37-1.37c-.4-.4-.92-.59-1.43-.59-.52 0-1.04.2-1.43.59L10.3 9.45l-7.72 7.72c-.78.78-.78 2.05 0 2.83L4 21.41c.39.39.9.59 1.41.59.51 0 1.02-.2 1.41-.59l7.78-7.78 2.81-2.81c.8-.78.8-2.07 0-2.86zM5.41 20L4 18.59l7.72-7.72 1.47 1.35L5.41 20z\\\"/>\\n\",\n              \"  </svg>\\n\",\n              \"      </button>\\n\",\n              \"      \\n\",\n              \"  <style>\\n\",\n              \"    .colab-df-container {\\n\",\n              \"      display:flex;\\n\",\n              \"      flex-wrap:wrap;\\n\",\n              \"      gap: 12px;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .colab-df-convert {\\n\",\n              \"      background-color: #E8F0FE;\\n\",\n              \"      border: none;\\n\",\n              \"      border-radius: 50%;\\n\",\n              \"      cursor: pointer;\\n\",\n              \"      display: none;\\n\",\n              \"      fill: #1967D2;\\n\",\n              \"      height: 32px;\\n\",\n              \"      padding: 0 0 0 0;\\n\",\n              \"      width: 32px;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .colab-df-convert:hover {\\n\",\n              \"      background-color: #E2EBFA;\\n\",\n              \"      box-shadow: 0px 1px 2px rgba(60, 64, 67, 0.3), 0px 1px 3px 1px rgba(60, 64, 67, 0.15);\\n\",\n              \"      fill: #174EA6;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    [theme=dark] .colab-df-convert {\\n\",\n              \"      background-color: #3B4455;\\n\",\n              \"      fill: #D2E3FC;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    [theme=dark] .colab-df-convert:hover {\\n\",\n              \"      background-color: #434B5C;\\n\",\n              \"      box-shadow: 0px 1px 3px 1px rgba(0, 0, 0, 0.15);\\n\",\n              \"      filter: drop-shadow(0px 1px 2px rgba(0, 0, 0, 0.3));\\n\",\n              \"      fill: #FFFFFF;\\n\",\n              \"    }\\n\",\n              \"  </style>\\n\",\n              \"\\n\",\n              \"      <script>\\n\",\n              \"        const buttonEl =\\n\",\n              \"          document.querySelector('#df-93a4fa7a-32c4-4c2d-803e-5e150f825186 button.colab-df-convert');\\n\",\n              \"        buttonEl.style.display =\\n\",\n              \"          google.colab.kernel.accessAllowed ? 'block' : 'none';\\n\",\n              \"\\n\",\n              \"        async function convertToInteractive(key) {\\n\",\n              \"          const element = document.querySelector('#df-93a4fa7a-32c4-4c2d-803e-5e150f825186');\\n\",\n              \"          const dataTable =\\n\",\n              \"            await google.colab.kernel.invokeFunction('convertToInteractive',\\n\",\n              \"                                                     [key], {});\\n\",\n              \"          if (!dataTable) return;\\n\",\n              \"\\n\",\n              \"          const docLinkHtml = 'Like what you see? 
Visit the ' +\\n\",\n              \"            '<a target=\\\"_blank\\\" href=https://colab.research.google.com/notebooks/data_table.ipynb>data table notebook</a>'\\n\",\n              \"            + ' to learn more about interactive tables.';\\n\",\n              \"          element.innerHTML = '';\\n\",\n              \"          dataTable['output_type'] = 'display_data';\\n\",\n              \"          await google.colab.output.renderOutput(dataTable, element);\\n\",\n              \"          const docLink = document.createElement('div');\\n\",\n              \"          docLink.innerHTML = docLinkHtml;\\n\",\n              \"          element.appendChild(docLink);\\n\",\n              \"        }\\n\",\n              \"      </script>\\n\",\n              \"    </div>\\n\",\n              \"  </div>\\n\",\n              \"  \"\n            ]\n          },\n          \"metadata\": {},\n          \"execution_count\": 17\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"print(onnx_df.to_markdown(index=False))\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"AIQDA9FaJZ7Y\",\n        \"outputId\": \"8e8d4109-822e-4328-b2ca-66d4b9a19f8d\"\n      },\n      \"execution_count\": 18,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"| ONNX     |   batch |   encode_image |   encode_text |   total |\\n\",\n            \"|:---------|--------:|---------------:|--------------:|--------:|\\n\",\n            \"| ViT-B/32 |       2 |          0.234 |         0.162 |   0.396 |\\n\",\n            \"| ViT-B/32 |       8 |          0.923 |         0.656 |   1.579 |\\n\",\n            \"| ViT-B/32 |      16 |          2.079 |         1.288 |   3.367 |\\n\",\n            \"| ViT-B/32 |      32 |          3.937 |         2.658 |   6.595 |\\n\",\n            \"| ViT-B/32 |      64 |          7.944 |         5.567 |  13.511 |\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"clip_df = pd.DataFrame({\\\"TORCH\\\": [\\\"ViT-B/32\\\"] * 5,\\n\",\n        \"              \\\"batch\\\": [2, 8, 16, 32, 64],\\n\",\n        \"              \\\"encode_image\\\": [i[1] for i in clip_results[\\\"encode_image\\\"]],\\n\",\n        \"              \\\"encode_text\\\": [i[1] for i in clip_results[\\\"encode_text\\\"]]})\\n\",\n        \"clip_df[\\\"total\\\"] = clip_df[\\\"encode_image\\\"] + clip_df[\\\"encode_text\\\"]\"\n      ],\n      \"metadata\": {\n        \"id\": \"E1OXQUDvDZmI\"\n      },\n      \"execution_count\": 19,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"print(clip_df.to_markdown(index=False))\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"xAj-ynhCDpPO\",\n        \"outputId\": \"88243c7f-bd6d-4a63-9ee2-154440c3df7e\"\n      },\n      \"execution_count\": 20,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"| TORCH    |   batch |   encode_image |   encode_text |   total |\\n\",\n            \"|:---------|--------:|---------------:|--------------:|--------:|\\n\",\n            \"| ViT-B/32 |       2 |          0.343 |         0.243 |   0.586 |\\n\",\n            
\"| ViT-B/32 |       8 |          1.093 |         0.831 |   1.924 |\\n\",\n            \"| ViT-B/32 |      16 |          1.952 |         1.523 |   3.475 |\\n\",\n            \"| ViT-B/32 |      32 |          4.079 |         3.015 |   7.094 |\\n\",\n            \"| ViT-B/32 |      64 |          8.07  |         6.212 |  14.282 |\\n\"\n          ]\n        }\n      ]\n    }\n  ]\n}"
  },
  {
    "path": "examples/dev/clip_onnx_benchmark_gpu.ipynb",
    "content": "{\n  \"nbformat\": 4,\n  \"nbformat_minor\": 0,\n  \"metadata\": {\n    \"colab\": {\n      \"name\": \"clip-onnx-benchmark-gpu.ipynb\",\n      \"provenance\": []\n    },\n    \"kernelspec\": {\n      \"name\": \"python3\",\n      \"display_name\": \"Python 3\"\n    },\n    \"language_info\": {\n      \"name\": \"python\"\n    },\n    \"accelerator\": \"GPU\"\n  },\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## Restart colab session after installation\\n\",\n        \"Reload the session if something doesn't work\"\n      ],\n      \"metadata\": {\n        \"id\": \"fxPg_VvZuScV\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": 1,\n      \"metadata\": {\n        \"id\": \"al_QNjyFq6Jj\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"%%capture\\n\",\n        \"!pip install git+https://github.com/Lednik7/CLIP-ONNX.git\\n\",\n        \"!pip install git+https://github.com/openai/CLIP.git\\n\",\n        \"!pip install onnxruntime-gpu\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"%%capture\\n\",\n        \"!wget -c -O CLIP.png https://github.com/openai/CLIP/blob/main/CLIP.png?raw=true\"\n      ],\n      \"metadata\": {\n        \"id\": \"42eeJz9lTdJ6\"\n      },\n      \"execution_count\": 2,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"!nvidia-smi\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"XuauIZIBSEUX\",\n        \"outputId\": \"7e2b352b-751e-439e-bb3d-4e1323e2e44d\"\n      },\n      \"execution_count\": 3,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"Thu Jan  6 15:47:04 2022       \\n\",\n            \"+-----------------------------------------------------------------------------+\\n\",\n            \"| NVIDIA-SMI 495.44       Driver Version: 460.32.03    CUDA Version: 11.2     |\\n\",\n            \"|-------------------------------+----------------------+----------------------+\\n\",\n            \"| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |\\n\",\n            \"| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |\\n\",\n            \"|                               |                      |               MIG M. 
|\\n\",\n            \"|===============================+======================+======================|\\n\",\n            \"|   0  Tesla K80           Off  | 00000000:00:04.0 Off |                    0 |\\n\",\n            \"| N/A   34C    P8    28W / 149W |      0MiB / 11441MiB |      0%      Default |\\n\",\n            \"|                               |                      |                  N/A |\\n\",\n            \"+-------------------------------+----------------------+----------------------+\\n\",\n            \"                                                                               \\n\",\n            \"+-----------------------------------------------------------------------------+\\n\",\n            \"| Processes:                                                                  |\\n\",\n            \"|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |\\n\",\n            \"|        ID   ID                                                   Usage      |\\n\",\n            \"|=============================================================================|\\n\",\n            \"|  No running processes found                                                 |\\n\",\n            \"+-----------------------------------------------------------------------------+\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import onnxruntime\\n\",\n        \"print(onnxruntime.get_device())\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"gqvxpdajRX5_\",\n        \"outputId\": \"7c44b4e1-d916-42d9-cc61-52efdf0fa9a9\"\n      },\n      \"execution_count\": 20,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"GPU\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## GPU inference mode\\n\",\n        \"Select a GPU runtime to continue:\\n\",\n        \"\\n\",\n        \"Click Runtime -> Change Runtime Type -> set \\\"Hardware accelerator\\\" to GPU. 
Save it, and you may be connected to a GPU\"\n      ],\n      \"metadata\": {\n        \"id\": \"010k-ksVTjAu\"\n      }\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"### Torch CLIP\"\n      ],\n      \"metadata\": {\n        \"id\": \"KdTz0IJWVBqE\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import clip\\n\",\n        \"from PIL import Image\\n\",\n        \"import numpy as np\\n\",\n        \"\\n\",\n        \"# onnx cannot work with cuda\\n\",\n        \"model, preprocess = clip.load(\\\"ViT-B/32\\\", device=\\\"cpu\\\", jit=False)\\n\",\n        \"\\n\",\n        \"# batch first\\n\",\n        \"image = preprocess(Image.open(\\\"CLIP.png\\\")).unsqueeze(0) # [1, 3, 224, 224]\\n\",\n        \"image_onnx = image.detach().cpu().numpy().astype(np.float32)\\n\",\n        \"\\n\",\n        \"# batch first\\n\",\n        \"text = clip.tokenize([\\\"a diagram\\\", \\\"a dog\\\", \\\"a cat\\\"]) # [3, 77]\\n\",\n        \"text_onnx = text.detach().cpu().numpy().astype(np.int64)\"\n      ],\n      \"metadata\": {\n        \"id\": \"9ROPwKYurOhP\"\n      },\n      \"execution_count\": 1,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"### CLIP-ONNX\"\n      ],\n      \"metadata\": {\n        \"id\": \"Ao2MriaVVG6Y\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"from clip_onnx import clip_onnx, attention\\n\",\n        \"clip.model.ResidualAttentionBlock.attention = attention\\n\",\n        \"\\n\",\n        \"onnx_model = clip_onnx(model)\\n\",\n        \"onnx_model.convert2onnx(image, text, verbose=False)\\n\",\n        \"# ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']\\n\",\n        \"onnx_model.start_sessions(providers=[\\\"CPUExecutionProvider\\\"]) # cpu mode\"\n      ],\n      \"metadata\": {\n        \"id\": \"nSeG9uAZrcph\",\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"outputId\": \"25e07d68-6ef2-44c4-d144-c43b611f3316\"\n      },\n      \"execution_count\": 4,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stderr\",\n          \"text\": [\n            \"/usr/local/lib/python3.7/dist-packages/clip_onnx/utils.py:40: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\\n\",\n            \"  head_dim = q.shape[2] // num_heads\\n\",\n            \"/usr/local/lib/python3.7/dist-packages/torch/onnx/symbolic_helper.py:716: UserWarning: allowzero=0 by default. In order to honor zero value in shape use allowzero=1\\n\",\n            \"  warnings.warn(\\\"allowzero=0 by default. In order to honor zero value in shape use allowzero=1\\\")\\n\",\n            \"/usr/local/lib/python3.7/dist-packages/torch/onnx/symbolic_opset9.py:2819: UserWarning: Exporting aten::index operator of advanced indexing in opset 14 is achieved by combination of multiple ONNX operators, including Reshape, Transpose, Concat, and Gather. 
If indices include negative values, the exported graph will produce incorrect results.\\n\",\n            \"  \\\"If indices include negative values, the exported graph will produce incorrect results.\\\")\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"from clip_onnx import clip_onnx, attention\\n\",\n        \"clip.model.ResidualAttentionBlock.attention = attention\"\n      ],\n      \"metadata\": {\n        \"id\": \"imMVbHFO-KSH\"\n      },\n      \"execution_count\": 2,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"onnx_model = clip_onnx(model)\\n\",\n        \"onnx_model.load_onnx(\\\"/content/clip_visual.onnx\\\",\\n\",\n        \"                     \\\"/content/clip_textual.onnx\\\",\\n\",\n        \"                     model.logit_scale.exp())\\n\",\n        \"onnx_model.start_sessions(providers=[\\\"CUDAExecutionProvider\\\"]) # GPU mode\"\n      ],\n      \"metadata\": {\n        \"id\": \"PsDS7ty79zZf\"\n      },\n      \"execution_count\": 3,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"onnx_model.visual_session.get_providers()\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"aZsGJNrbNCYe\",\n        \"outputId\": \"9dcdd2d6-2a73-4dad-9ea7-c2892273c631\"\n      },\n      \"execution_count\": 4,\n      \"outputs\": [\n        {\n          \"output_type\": \"execute_result\",\n          \"data\": {\n            \"text/plain\": [\n              \"['CUDAExecutionProvider', 'CPUExecutionProvider']\"\n            ]\n          },\n          \"metadata\": {},\n          \"execution_count\": 4\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## Benchmark\"\n      ],\n      \"metadata\": {\n        \"id\": \"J5IcOG_6jAFz\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"model, preprocess = clip.load(\\\"ViT-B/32\\\", device=\\\"cuda\\\", jit=False)\"\n      ],\n      \"metadata\": {\n        \"id\": \"SJ_5_x7vLepK\"\n      },\n      \"execution_count\": 5,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"model.eval()\\n\",\n        \"for x in model.parameters():\\n\",\n        \"    x.requires_grad = False\"\n      ],\n      \"metadata\": {\n        \"id\": \"OnOzZ3LMuubW\"\n      },\n      \"execution_count\": 6,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import numpy, random, torch\"\n      ],\n      \"metadata\": {\n        \"id\": \"wDwqRRrTGKUS\"\n      },\n      \"execution_count\": 7,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"def set_seed():\\n\",\n        \"    torch.manual_seed(12)\\n\",\n        \"    torch.cuda.manual_seed(12)\\n\",\n        \"    np.random.seed(12)\\n\",\n        \"    random.seed(12)\\n\",\n        \"\\n\",\n        \"    torch.backends.cudnn.deterministic=True\"\n      ],\n      \"metadata\": {\n        \"id\": \"9H17n_6gGJgT\"\n      },\n      \"execution_count\": 8,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"%timeit onnx_model.encode_image(image_onnx)\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n    
    },\n        \"id\": \"IsJ2TsBRNh8f\",\n        \"outputId\": \"bb642ee7-0112-4195-be35-14fdf719e7bc\"\n      },\n      \"execution_count\": 9,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"The slowest run took 23.27 times longer than the fastest. This could mean that an intermediate result is being cached.\\n\",\n            \"1 loop, best of 5: 20.1 ms per loop\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import torch\\n\",\n        \"import time\\n\",\n        \"\\n\",\n        \"n = 5\\n\",\n        \"clip_results = {\\\"encode_image\\\": [],\\n\",\n        \"                \\\"encode_text\\\": []}\\n\",\n        \"onnx_results = {\\\"encode_image\\\": [],\\n\",\n        \"                \\\"encode_text\\\": []}\\n\",\n        \"for batch in [2, 8, 16, 32, 64]:\\n\",\n        \"    set_seed()\\n\",\n        \"    image_input = torch.randint(1, 255, (batch, 3, 224, 224)).cuda()\\n\",\n        \"    text_input = torch.randint(320, 49407, (batch, 77)).cuda()\\n\",\n        \"    image_input_onnx = image_input.detach().cpu().numpy().astype(np.float32)\\n\",\n        \"    text_input_onnx = text_input.detach().cpu().numpy().astype(np.int64)\\n\",\n        \"\\n\",\n        \"    t_mean = []\\n\",\n        \"    for _ in range(n):\\n\",\n        \"        t = time.time()\\n\",\n        \"        onnx_model.encode_image(image_input_onnx)\\n\",\n        \"        t_mean.append(time.time() - t)\\n\",\n        \"    print(\\\"onnx\\\", batch, \\\"encode_image\\\", round(sum(t_mean) / n, 3))\\n\",\n        \"    torch.cuda.empty_cache()\\n\",\n        \"    onnx_results[\\\"encode_image\\\"].append([batch, round(sum(t_mean) / n, 3)])\\n\",\n        \"\\n\",\n        \"    with torch.inference_mode():\\n\",\n        \"        t_mean = []\\n\",\n        \"        for _ in range(n):\\n\",\n        \"            t = time.time()\\n\",\n        \"            model.encode_image(image_input)\\n\",\n        \"            t_mean.append(time.time() - t)\\n\",\n        \"    print(\\\"torch\\\", batch, \\\"encode_image\\\", round(sum(t_mean) / n, 3))\\n\",\n        \"    torch.cuda.empty_cache()\\n\",\n        \"    clip_results[\\\"encode_image\\\"].append([batch, round(sum(t_mean) / n, 3)])\\n\",\n        \"\\n\",\n        \"    t_mean = []\\n\",\n        \"    for _ in range(n):\\n\",\n        \"        t = time.time()\\n\",\n        \"        onnx_model.encode_text(text_input_onnx)\\n\",\n        \"        t_mean.append(time.time() - t)\\n\",\n        \"    print(\\\"onnx\\\", batch, \\\"encode_text\\\", round(sum(t_mean) / n, 3))\\n\",\n        \"    torch.cuda.empty_cache()\\n\",\n        \"    onnx_results[\\\"encode_text\\\"].append([batch, round(sum(t_mean) / n, 3)])\\n\",\n        \"\\n\",\n        \"    with torch.inference_mode():\\n\",\n        \"        t_mean = []\\n\",\n        \"        for _ in range(n):\\n\",\n        \"            t = time.time()\\n\",\n        \"            model.encode_text(text_input)\\n\",\n        \"            t_mean.append(time.time() - t)\\n\",\n        \"    print(\\\"torch\\\", batch, \\\"encode_text\\\", round(sum(t_mean) / n, 3))\\n\",\n        \"    torch.cuda.empty_cache()\\n\",\n        \"    clip_results[\\\"encode_text\\\"].append([batch, round(sum(t_mean) / n, 3)])\\n\",\n        \"\\n\",\n        \"    print(\\\"-\\\" * 78)\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n 
         \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"4lFL6tzWjiWL\",\n        \"outputId\": \"a209b78a-fe78-4b46-9220-4b9624a1568f\"\n      },\n      \"execution_count\": 10,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"onnx 2 encode_image 0.073\\n\",\n            \"torch 2 encode_image 0.041\\n\",\n            \"onnx 2 encode_text 0.032\\n\",\n            \"torch 2 encode_text 0.033\\n\",\n            \"------------------------------------------------------------------------------\\n\",\n            \"onnx 8 encode_image 0.088\\n\",\n            \"torch 8 encode_image 0.128\\n\",\n            \"onnx 8 encode_text 0.052\\n\",\n            \"torch 8 encode_text 0.102\\n\",\n            \"------------------------------------------------------------------------------\\n\",\n            \"onnx 16 encode_image 0.123\\n\",\n            \"torch 16 encode_image 0.258\\n\",\n            \"onnx 16 encode_text 0.08\\n\",\n            \"torch 16 encode_text 0.201\\n\",\n            \"------------------------------------------------------------------------------\\n\",\n            \"onnx 32 encode_image 0.196\\n\",\n            \"torch 32 encode_image 0.505\\n\",\n            \"onnx 32 encode_text 0.138\\n\",\n            \"torch 32 encode_text 0.386\\n\",\n            \"------------------------------------------------------------------------------\\n\",\n            \"onnx 64 encode_image 0.352\\n\",\n            \"torch 64 encode_image 0.995\\n\",\n            \"onnx 64 encode_text 0.252\\n\",\n            \"torch 64 encode_text 0.754\\n\",\n            \"------------------------------------------------------------------------------\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import pandas as pd\"\n      ],\n      \"metadata\": {\n        \"id\": \"P2YhbE9v_4ci\"\n      },\n      \"execution_count\": 11,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"pd.DataFrame({\\\"backend\\\": [\\\"onnx\\\", \\\"torch\\\"] * 5,\\n\",\n        \"              \\\"batch\\\": [2, 2, 8, 8, 16, 16, 32, 32, 64, 64],\\n\",\n        \"              \\\"encode_image\\\": [j[1] for i in zip(onnx_results[\\\"encode_image\\\"],\\n\",\n        \"                                              clip_results[\\\"encode_image\\\"]) for j in i],\\n\",\n        \"              \\\"encode_text\\\": [j[1] for i in zip(onnx_results[\\\"encode_text\\\"],\\n\",\n        \"                                              clip_results[\\\"encode_text\\\"]) for j in i]})\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\",\n          \"height\": 362\n        },\n        \"id\": \"WfZfDk4PAlqm\",\n        \"outputId\": \"aa180c38-35f8-403c-a172-4e78266510d5\"\n      },\n      \"execution_count\": 12,\n      \"outputs\": [\n        {\n          \"output_type\": \"execute_result\",\n          \"data\": {\n            \"text/html\": [\n              \"\\n\",\n              \"  <div id=\\\"df-1af743e1-b1a7-4998-8692-878c02e1ea97\\\">\\n\",\n              \"    <div class=\\\"colab-df-container\\\">\\n\",\n              \"      <div>\\n\",\n              \"<style scoped>\\n\",\n              \"    .dataframe tbody tr th:only-of-type {\\n\",\n              \"        vertical-align: middle;\\n\",\n              \"    }\\n\",\n              
\"\\n\",\n              \"    .dataframe tbody tr th {\\n\",\n              \"        vertical-align: top;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .dataframe thead th {\\n\",\n              \"        text-align: right;\\n\",\n              \"    }\\n\",\n              \"</style>\\n\",\n              \"<table border=\\\"1\\\" class=\\\"dataframe\\\">\\n\",\n              \"  <thead>\\n\",\n              \"    <tr style=\\\"text-align: right;\\\">\\n\",\n              \"      <th></th>\\n\",\n              \"      <th>backend</th>\\n\",\n              \"      <th>batch</th>\\n\",\n              \"      <th>encode_image</th>\\n\",\n              \"      <th>encode_text</th>\\n\",\n              \"    </tr>\\n\",\n              \"  </thead>\\n\",\n              \"  <tbody>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>0</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>2</td>\\n\",\n              \"      <td>0.073</td>\\n\",\n              \"      <td>0.032</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>1</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>2</td>\\n\",\n              \"      <td>0.041</td>\\n\",\n              \"      <td>0.033</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>2</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>8</td>\\n\",\n              \"      <td>0.088</td>\\n\",\n              \"      <td>0.052</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>3</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>8</td>\\n\",\n              \"      <td>0.128</td>\\n\",\n              \"      <td>0.102</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>4</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>16</td>\\n\",\n              \"      <td>0.123</td>\\n\",\n              \"      <td>0.080</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>5</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>16</td>\\n\",\n              \"      <td>0.258</td>\\n\",\n              \"      <td>0.201</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>6</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>32</td>\\n\",\n              \"      <td>0.196</td>\\n\",\n              \"      <td>0.138</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>7</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>32</td>\\n\",\n              \"      <td>0.505</td>\\n\",\n              \"      <td>0.386</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>8</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>64</td>\\n\",\n              \"      <td>0.352</td>\\n\",\n              \"      <td>0.252</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>9</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>64</td>\\n\",\n              \"      <td>0.995</td>\\n\",\n              \"      
<td>0.754</td>\\n\",\n              \"    </tr>\\n\",\n              \"  </tbody>\\n\",\n              \"</table>\\n\",\n              \"</div>\\n\",\n              \"      <button class=\\\"colab-df-convert\\\" onclick=\\\"convertToInteractive('df-1af743e1-b1a7-4998-8692-878c02e1ea97')\\\"\\n\",\n              \"              title=\\\"Convert this dataframe to an interactive table.\\\"\\n\",\n              \"              style=\\\"display:none;\\\">\\n\",\n              \"        \\n\",\n              \"  <svg xmlns=\\\"http://www.w3.org/2000/svg\\\" height=\\\"24px\\\"viewBox=\\\"0 0 24 24\\\"\\n\",\n              \"       width=\\\"24px\\\">\\n\",\n              \"    <path d=\\\"M0 0h24v24H0V0z\\\" fill=\\\"none\\\"/>\\n\",\n              \"    <path d=\\\"M18.56 5.44l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 2.06-2.06.94zm-11 1L8.5 8.5l.94-2.06 2.06-.94-2.06-.94L8.5 2.5l-.94 2.06-2.06.94zm10 10l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 2.06-2.06.94z\\\"/><path d=\\\"M17.41 7.96l-1.37-1.37c-.4-.4-.92-.59-1.43-.59-.52 0-1.04.2-1.43.59L10.3 9.45l-7.72 7.72c-.78.78-.78 2.05 0 2.83L4 21.41c.39.39.9.59 1.41.59.51 0 1.02-.2 1.41-.59l7.78-7.78 2.81-2.81c.8-.78.8-2.07 0-2.86zM5.41 20L4 18.59l7.72-7.72 1.47 1.35L5.41 20z\\\"/>\\n\",\n              \"  </svg>\\n\",\n              \"      </button>\\n\",\n              \"      \\n\",\n              \"  <style>\\n\",\n              \"    .colab-df-container {\\n\",\n              \"      display:flex;\\n\",\n              \"      flex-wrap:wrap;\\n\",\n              \"      gap: 12px;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .colab-df-convert {\\n\",\n              \"      background-color: #E8F0FE;\\n\",\n              \"      border: none;\\n\",\n              \"      border-radius: 50%;\\n\",\n              \"      cursor: pointer;\\n\",\n              \"      display: none;\\n\",\n              \"      fill: #1967D2;\\n\",\n              \"      height: 32px;\\n\",\n              \"      padding: 0 0 0 0;\\n\",\n              \"      width: 32px;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .colab-df-convert:hover {\\n\",\n              \"      background-color: #E2EBFA;\\n\",\n              \"      box-shadow: 0px 1px 2px rgba(60, 64, 67, 0.3), 0px 1px 3px 1px rgba(60, 64, 67, 0.15);\\n\",\n              \"      fill: #174EA6;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    [theme=dark] .colab-df-convert {\\n\",\n              \"      background-color: #3B4455;\\n\",\n              \"      fill: #D2E3FC;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    [theme=dark] .colab-df-convert:hover {\\n\",\n              \"      background-color: #434B5C;\\n\",\n              \"      box-shadow: 0px 1px 3px 1px rgba(0, 0, 0, 0.15);\\n\",\n              \"      filter: drop-shadow(0px 1px 2px rgba(0, 0, 0, 0.3));\\n\",\n              \"      fill: #FFFFFF;\\n\",\n              \"    }\\n\",\n              \"  </style>\\n\",\n              \"\\n\",\n              \"      <script>\\n\",\n              \"        const buttonEl =\\n\",\n              \"          document.querySelector('#df-1af743e1-b1a7-4998-8692-878c02e1ea97 button.colab-df-convert');\\n\",\n              \"        buttonEl.style.display =\\n\",\n              \"          google.colab.kernel.accessAllowed ? 
'block' : 'none';\\n\",\n              \"\\n\",\n              \"        async function convertToInteractive(key) {\\n\",\n              \"          const element = document.querySelector('#df-1af743e1-b1a7-4998-8692-878c02e1ea97');\\n\",\n              \"          const dataTable =\\n\",\n              \"            await google.colab.kernel.invokeFunction('convertToInteractive',\\n\",\n              \"                                                     [key], {});\\n\",\n              \"          if (!dataTable) return;\\n\",\n              \"\\n\",\n              \"          const docLinkHtml = 'Like what you see? Visit the ' +\\n\",\n              \"            '<a target=\\\"_blank\\\" href=https://colab.research.google.com/notebooks/data_table.ipynb>data table notebook</a>'\\n\",\n              \"            + ' to learn more about interactive tables.';\\n\",\n              \"          element.innerHTML = '';\\n\",\n              \"          dataTable['output_type'] = 'display_data';\\n\",\n              \"          await google.colab.output.renderOutput(dataTable, element);\\n\",\n              \"          const docLink = document.createElement('div');\\n\",\n              \"          docLink.innerHTML = docLinkHtml;\\n\",\n              \"          element.appendChild(docLink);\\n\",\n              \"        }\\n\",\n              \"      </script>\\n\",\n              \"    </div>\\n\",\n              \"  </div>\\n\",\n              \"  \"\n            ],\n            \"text/plain\": [\n              \"  backend  batch  encode_image  encode_text\\n\",\n              \"0    onnx      2         0.073        0.032\\n\",\n              \"1   torch      2         0.041        0.033\\n\",\n              \"2    onnx      8         0.088        0.052\\n\",\n              \"3   torch      8         0.128        0.102\\n\",\n              \"4    onnx     16         0.123        0.080\\n\",\n              \"5   torch     16         0.258        0.201\\n\",\n              \"6    onnx     32         0.196        0.138\\n\",\n              \"7   torch     32         0.505        0.386\\n\",\n              \"8    onnx     64         0.352        0.252\\n\",\n              \"9   torch     64         0.995        0.754\"\n            ]\n          },\n          \"metadata\": {},\n          \"execution_count\": 12\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"onnx_df = pd.DataFrame({\\\"ONNX\\\": [\\\"ViT-B/32\\\"] * 5,\\n\",\n        \"              \\\"batch\\\": [2, 8, 16, 32, 64],\\n\",\n        \"              \\\"encode_image\\\": [i[1] for i in onnx_results[\\\"encode_image\\\"]],\\n\",\n        \"              \\\"encode_text\\\": [i[1] for i in onnx_results[\\\"encode_text\\\"]]})\\n\",\n        \"onnx_df[\\\"summary\\\"] = onnx_df[\\\"encode_image\\\"] + onnx_df[\\\"encode_text\\\"]\"\n      ],\n      \"metadata\": {\n        \"id\": \"Xpw9lV7yBbA8\"\n      },\n      \"execution_count\": 13,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"onnx_df\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\",\n          \"height\": 206\n        },\n        \"id\": \"LItAyQkeDhnQ\",\n        \"outputId\": \"ebd84ad1-f305-4578-9164-2884aaa2b245\"\n      },\n      \"execution_count\": 14,\n      \"outputs\": [\n        {\n          \"output_type\": \"execute_result\",\n          \"data\": {\n            \"text/html\": [\n              
\"\\n\",\n              \"  <div id=\\\"df-37ded07e-3c67-4ace-b569-8fa6f4dd5a44\\\">\\n\",\n              \"    <div class=\\\"colab-df-container\\\">\\n\",\n              \"      <div>\\n\",\n              \"<style scoped>\\n\",\n              \"    .dataframe tbody tr th:only-of-type {\\n\",\n              \"        vertical-align: middle;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .dataframe tbody tr th {\\n\",\n              \"        vertical-align: top;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .dataframe thead th {\\n\",\n              \"        text-align: right;\\n\",\n              \"    }\\n\",\n              \"</style>\\n\",\n              \"<table border=\\\"1\\\" class=\\\"dataframe\\\">\\n\",\n              \"  <thead>\\n\",\n              \"    <tr style=\\\"text-align: right;\\\">\\n\",\n              \"      <th></th>\\n\",\n              \"      <th>ONNX</th>\\n\",\n              \"      <th>batch</th>\\n\",\n              \"      <th>encode_image</th>\\n\",\n              \"      <th>encode_text</th>\\n\",\n              \"      <th>summary</th>\\n\",\n              \"    </tr>\\n\",\n              \"  </thead>\\n\",\n              \"  <tbody>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>0</th>\\n\",\n              \"      <td>ViT-B/32</td>\\n\",\n              \"      <td>2</td>\\n\",\n              \"      <td>0.073</td>\\n\",\n              \"      <td>0.032</td>\\n\",\n              \"      <td>0.105</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>1</th>\\n\",\n              \"      <td>ViT-B/32</td>\\n\",\n              \"      <td>8</td>\\n\",\n              \"      <td>0.088</td>\\n\",\n              \"      <td>0.052</td>\\n\",\n              \"      <td>0.140</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>2</th>\\n\",\n              \"      <td>ViT-B/32</td>\\n\",\n              \"      <td>16</td>\\n\",\n              \"      <td>0.123</td>\\n\",\n              \"      <td>0.080</td>\\n\",\n              \"      <td>0.203</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>3</th>\\n\",\n              \"      <td>ViT-B/32</td>\\n\",\n              \"      <td>32</td>\\n\",\n              \"      <td>0.196</td>\\n\",\n              \"      <td>0.138</td>\\n\",\n              \"      <td>0.334</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>4</th>\\n\",\n              \"      <td>ViT-B/32</td>\\n\",\n              \"      <td>64</td>\\n\",\n              \"      <td>0.352</td>\\n\",\n              \"      <td>0.252</td>\\n\",\n              \"      <td>0.604</td>\\n\",\n              \"    </tr>\\n\",\n              \"  </tbody>\\n\",\n              \"</table>\\n\",\n              \"</div>\\n\",\n              \"      <button class=\\\"colab-df-convert\\\" onclick=\\\"convertToInteractive('df-37ded07e-3c67-4ace-b569-8fa6f4dd5a44')\\\"\\n\",\n              \"              title=\\\"Convert this dataframe to an interactive table.\\\"\\n\",\n              \"              style=\\\"display:none;\\\">\\n\",\n              \"        \\n\",\n              \"  <svg xmlns=\\\"http://www.w3.org/2000/svg\\\" height=\\\"24px\\\"viewBox=\\\"0 0 24 24\\\"\\n\",\n              \"       width=\\\"24px\\\">\\n\",\n              \"    <path d=\\\"M0 0h24v24H0V0z\\\" 
fill=\\\"none\\\"/>\\n\",\n              \"    <path d=\\\"M18.56 5.44l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 2.06-2.06.94zm-11 1L8.5 8.5l.94-2.06 2.06-.94-2.06-.94L8.5 2.5l-.94 2.06-2.06.94zm10 10l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 2.06-2.06.94z\\\"/><path d=\\\"M17.41 7.96l-1.37-1.37c-.4-.4-.92-.59-1.43-.59-.52 0-1.04.2-1.43.59L10.3 9.45l-7.72 7.72c-.78.78-.78 2.05 0 2.83L4 21.41c.39.39.9.59 1.41.59.51 0 1.02-.2 1.41-.59l7.78-7.78 2.81-2.81c.8-.78.8-2.07 0-2.86zM5.41 20L4 18.59l7.72-7.72 1.47 1.35L5.41 20z\\\"/>\\n\",\n              \"  </svg>\\n\",\n              \"      </button>\\n\",\n              \"      \\n\",\n              \"  <style>\\n\",\n              \"    .colab-df-container {\\n\",\n              \"      display:flex;\\n\",\n              \"      flex-wrap:wrap;\\n\",\n              \"      gap: 12px;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .colab-df-convert {\\n\",\n              \"      background-color: #E8F0FE;\\n\",\n              \"      border: none;\\n\",\n              \"      border-radius: 50%;\\n\",\n              \"      cursor: pointer;\\n\",\n              \"      display: none;\\n\",\n              \"      fill: #1967D2;\\n\",\n              \"      height: 32px;\\n\",\n              \"      padding: 0 0 0 0;\\n\",\n              \"      width: 32px;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .colab-df-convert:hover {\\n\",\n              \"      background-color: #E2EBFA;\\n\",\n              \"      box-shadow: 0px 1px 2px rgba(60, 64, 67, 0.3), 0px 1px 3px 1px rgba(60, 64, 67, 0.15);\\n\",\n              \"      fill: #174EA6;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    [theme=dark] .colab-df-convert {\\n\",\n              \"      background-color: #3B4455;\\n\",\n              \"      fill: #D2E3FC;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    [theme=dark] .colab-df-convert:hover {\\n\",\n              \"      background-color: #434B5C;\\n\",\n              \"      box-shadow: 0px 1px 3px 1px rgba(0, 0, 0, 0.15);\\n\",\n              \"      filter: drop-shadow(0px 1px 2px rgba(0, 0, 0, 0.3));\\n\",\n              \"      fill: #FFFFFF;\\n\",\n              \"    }\\n\",\n              \"  </style>\\n\",\n              \"\\n\",\n              \"      <script>\\n\",\n              \"        const buttonEl =\\n\",\n              \"          document.querySelector('#df-37ded07e-3c67-4ace-b569-8fa6f4dd5a44 button.colab-df-convert');\\n\",\n              \"        buttonEl.style.display =\\n\",\n              \"          google.colab.kernel.accessAllowed ? 'block' : 'none';\\n\",\n              \"\\n\",\n              \"        async function convertToInteractive(key) {\\n\",\n              \"          const element = document.querySelector('#df-37ded07e-3c67-4ace-b569-8fa6f4dd5a44');\\n\",\n              \"          const dataTable =\\n\",\n              \"            await google.colab.kernel.invokeFunction('convertToInteractive',\\n\",\n              \"                                                     [key], {});\\n\",\n              \"          if (!dataTable) return;\\n\",\n              \"\\n\",\n              \"          const docLinkHtml = 'Like what you see? 
Visit the ' +\\n\",\n              \"            '<a target=\\\"_blank\\\" href=https://colab.research.google.com/notebooks/data_table.ipynb>data table notebook</a>'\\n\",\n              \"            + ' to learn more about interactive tables.';\\n\",\n              \"          element.innerHTML = '';\\n\",\n              \"          dataTable['output_type'] = 'display_data';\\n\",\n              \"          await google.colab.output.renderOutput(dataTable, element);\\n\",\n              \"          const docLink = document.createElement('div');\\n\",\n              \"          docLink.innerHTML = docLinkHtml;\\n\",\n              \"          element.appendChild(docLink);\\n\",\n              \"        }\\n\",\n              \"      </script>\\n\",\n              \"    </div>\\n\",\n              \"  </div>\\n\",\n              \"  \"\n            ],\n            \"text/plain\": [\n              \"       ONNX  batch  encode_image  encode_text  summary\\n\",\n              \"0  ViT-B/32      2         0.073        0.032    0.105\\n\",\n              \"1  ViT-B/32      8         0.088        0.052    0.140\\n\",\n              \"2  ViT-B/32     16         0.123        0.080    0.203\\n\",\n              \"3  ViT-B/32     32         0.196        0.138    0.334\\n\",\n              \"4  ViT-B/32     64         0.352        0.252    0.604\"\n            ]\n          },\n          \"metadata\": {},\n          \"execution_count\": 14\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"print(onnx_df.to_markdown(index=False))\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"AIQDA9FaJZ7Y\",\n        \"outputId\": \"4fdfd92a-5c8c-43d9-e875-7bcddc882113\"\n      },\n      \"execution_count\": 15,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"| ONNX     |   batch |   encode_image |   encode_text |   summary |\\n\",\n            \"|:---------|--------:|---------------:|--------------:|----------:|\\n\",\n            \"| ViT-B/32 |       2 |          0.073 |         0.032 |     0.105 |\\n\",\n            \"| ViT-B/32 |       8 |          0.088 |         0.052 |     0.14  |\\n\",\n            \"| ViT-B/32 |      16 |          0.123 |         0.08  |     0.203 |\\n\",\n            \"| ViT-B/32 |      32 |          0.196 |         0.138 |     0.334 |\\n\",\n            \"| ViT-B/32 |      64 |          0.352 |         0.252 |     0.604 |\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"clip_df = pd.DataFrame({\\\"TORCH\\\": [\\\"ViT-B/32\\\"] * 5,\\n\",\n        \"              \\\"batch\\\": [2, 8, 16, 32, 64],\\n\",\n        \"              \\\"encode_image\\\": [i[1] for i in clip_results[\\\"encode_image\\\"]],\\n\",\n        \"              \\\"encode_text\\\": [i[1] for i in clip_results[\\\"encode_text\\\"]]})\\n\",\n        \"clip_df[\\\"summary\\\"] = clip_df[\\\"encode_image\\\"] + clip_df[\\\"encode_text\\\"]\"\n      ],\n      \"metadata\": {\n        \"id\": \"E1OXQUDvDZmI\"\n      },\n      \"execution_count\": 16,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"print(clip_df.to_markdown(index=False))\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": 
\"xAj-ynhCDpPO\",\n        \"outputId\": \"6a36903d-6bba-4675-8eb3-7f58af98e165\"\n      },\n      \"execution_count\": 17,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"| TORCH    |   batch |   encode_image |   encode_text |   summary |\\n\",\n            \"|:---------|--------:|---------------:|--------------:|----------:|\\n\",\n            \"| ViT-B/32 |       2 |          0.041 |         0.033 |     0.074 |\\n\",\n            \"| ViT-B/32 |       8 |          0.128 |         0.102 |     0.23  |\\n\",\n            \"| ViT-B/32 |      16 |          0.258 |         0.201 |     0.459 |\\n\",\n            \"| ViT-B/32 |      32 |          0.505 |         0.386 |     0.891 |\\n\",\n            \"| ViT-B/32 |      64 |          0.995 |         0.754 |     1.749 |\\n\"\n          ]\n        }\n      ]\n    }\n  ]\n}"
  },
  {
    "path": "examples/dev/clip_onnx_benchmark_gpu_K80.ipynb",
    "content": "{\n  \"nbformat\": 4,\n  \"nbformat_minor\": 0,\n  \"metadata\": {\n    \"colab\": {\n      \"name\": \"clip-onnx-benchmark-gpu-K80.ipynb\",\n      \"provenance\": [],\n      \"authorship_tag\": \"ABX9TyOXxz4T8v9RCW/JZlRRUtl4\",\n      \"include_colab_link\": true\n    },\n    \"kernelspec\": {\n      \"name\": \"python3\",\n      \"display_name\": \"Python 3\"\n    },\n    \"language_info\": {\n      \"name\": \"python\"\n    },\n    \"accelerator\": \"GPU\"\n  },\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"view-in-github\",\n        \"colab_type\": \"text\"\n      },\n      \"source\": [\n        \"<a href=\\\"https://colab.research.google.com/github/Lednik7/CLIP-ONNX/blob/dev/examples/dev/clip_onnx_benchmark_gpu_K80.ipynb\\\" target=\\\"_parent\\\"><img src=\\\"https://colab.research.google.com/assets/colab-badge.svg\\\" alt=\\\"Open In Colab\\\"/></a>\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## Restart colab session after installation\\n\",\n        \"Reload the session if something doesn't work\"\n      ],\n      \"metadata\": {\n        \"id\": \"fxPg_VvZuScV\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": 2,\n      \"metadata\": {\n        \"id\": \"al_QNjyFq6Jj\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"%%capture\\n\",\n        \"!pip install git+https://github.com/Lednik7/CLIP-ONNX.git@dev\\n\",\n        \"!pip install git+https://github.com/openai/CLIP.git\\n\",\n        \"!pip install onnxruntime-gpu\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"%%capture\\n\",\n        \"!wget -c -O CLIP.png https://github.com/openai/CLIP/blob/main/CLIP.png?raw=true\"\n      ],\n      \"metadata\": {\n        \"id\": \"42eeJz9lTdJ6\"\n      },\n      \"execution_count\": 3,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"!nvidia-smi\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"XuauIZIBSEUX\",\n        \"outputId\": \"3bfb5833-272d-4aa0-f296-edab8122547c\"\n      },\n      \"execution_count\": 1,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"Tue May  3 07:20:58 2022       \\n\",\n            \"+-----------------------------------------------------------------------------+\\n\",\n            \"| NVIDIA-SMI 460.32.03    Driver Version: 460.32.03    CUDA Version: 11.2     |\\n\",\n            \"|-------------------------------+----------------------+----------------------+\\n\",\n            \"| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |\\n\",\n            \"| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |\\n\",\n            \"|                               |                      |               MIG M. 
|\\n\",\n            \"|===============================+======================+======================|\\n\",\n            \"|   0  Tesla K80           Off  | 00000000:00:04.0 Off |                    0 |\\n\",\n            \"| N/A   56C    P8    29W / 149W |      0MiB / 11441MiB |      0%      Default |\\n\",\n            \"|                               |                      |                  N/A |\\n\",\n            \"+-------------------------------+----------------------+----------------------+\\n\",\n            \"                                                                               \\n\",\n            \"+-----------------------------------------------------------------------------+\\n\",\n            \"| Processes:                                                                  |\\n\",\n            \"|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |\\n\",\n            \"|        ID   ID                                                   Usage      |\\n\",\n            \"|=============================================================================|\\n\",\n            \"|  No running processes found                                                 |\\n\",\n            \"+-----------------------------------------------------------------------------+\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import onnxruntime\\n\",\n        \"print(onnxruntime.get_device())\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"gqvxpdajRX5_\",\n        \"outputId\": \"bb8e9195-fe9c-421c-e27b-d76da7136b82\"\n      },\n      \"execution_count\": 2,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"GPU\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## GPU inference mode\\n\",\n        \"Select a runtime GPU to continue:\\n\",\n        \"\\n\",\n        \"Click Runtime -> Change Runtime Type -> switch \\\"Harware accelerator\\\" to be GPU. 
Save it, and you should be connected to a GPU\"\n      ],\n      \"metadata\": {\n        \"id\": \"010k-ksVTjAu\"\n      }\n    },\n
    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"### Torch CLIP\"\n      ],\n      \"metadata\": {\n        \"id\": \"KdTz0IJWVBqE\"\n      }\n    },\n
    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import clip\\n\",\n        \"from PIL import Image\\n\",\n        \"import numpy as np\\n\",\n        \"\\n\",\n        \"# onnx cannot work with cuda\\n\",\n        \"model, preprocess = clip.load(\\\"ViT-B/32\\\", device=\\\"cpu\\\", jit=False)\\n\",\n        \"\\n\",\n        \"# batch first\\n\",\n        \"image = preprocess(Image.open(\\\"CLIP.png\\\")).unsqueeze(0)  # [1, 3, 224, 224]\\n\",\n        \"image_onnx = image.detach().cpu().numpy().astype(np.float32)\\n\",\n        \"\\n\",\n        \"# batch first\\n\",\n        \"text = clip.tokenize([\\\"a diagram\\\", \\\"a dog\\\", \\\"a cat\\\"]) # [3, 77]\\n\",\n        \"text_onnx = text.detach().cpu().numpy().astype(np.int32)\"\n      ],\n      \"metadata\": {\n        \"id\": \"9ROPwKYurOhP\"\n      },\n      \"execution_count\": 3,\n      \"outputs\": []\n    },\n
    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"### CLIP-ONNX\"\n      ],\n      \"metadata\": {\n        \"id\": \"Ao2MriaVVG6Y\"\n      }\n    },\n
    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"from clip_onnx import clip_onnx\\n\",\n        \"from clip_onnx.utils import DEFAULT_EXPORT\\n\",\n        \"\\n\",\n        \"DEFAULT_EXPORT[\\\"opset_version\\\"] = 15\\n\",\n        \"\\n\",\n        \"onnx_model = clip_onnx(model)\\n\",\n        \"onnx_model.convert2onnx(image, text, verbose=True)\\n\",\n        \"# ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']\\n\",\n        \"onnx_model.start_sessions(providers=[\\\"CPUExecutionProvider\\\"]) # cpu mode\"\n      ],\n      \"metadata\": {\n        \"id\": \"nSeG9uAZrcph\",\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"outputId\": \"1d4a8404-104f-4107-f2c4-e7e1f7b1d104\"\n      },\n      \"execution_count\": 5,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"[CLIP ONNX] Start convert visual model\\n\"\n          ]\n        },\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stderr\",\n          \"text\": [\n            \"/usr/local/lib/python3.7/dist-packages/torch/onnx/symbolic_helper.py:719: UserWarning: allowzero=0 by default. In order to honor zero value in shape use allowzero=1\\n\",\n            \"  warnings.warn(\\\"allowzero=0 by default. In order to honor zero value in shape use allowzero=1\\\")\\n\"\n          ]\n        },\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"[CLIP ONNX] Start check visual model\\n\",\n            \"[CLIP ONNX] Start convert textual model\\n\"\n          ]\n        },\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stderr\",\n          \"text\": [\n            \"/usr/local/lib/python3.7/dist-packages/torch/onnx/symbolic_opset9.py:2909: UserWarning: Exporting aten::index operator of advanced indexing in opset 15 is achieved by combination of multiple ONNX operators, including Reshape, Transpose, Concat, and Gather. 
If indices include negative values, the exported graph will produce incorrect results.\\n\",\n            \"  \\\"If indices include negative values, the exported graph will produce incorrect results.\\\")\\n\"\n          ]\n        },\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"[CLIP ONNX] Start check textual model\\n\",\n            \"[CLIP ONNX] Models converts successfully\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"onnx_model = clip_onnx(model)\\n\",\n        \"onnx_model.load_onnx(\\\"/content/clip_visual.onnx\\\",\\n\",\n        \"                     \\\"/content/clip_textual.onnx\\\",\\n\",\n        \"                     model.logit_scale.exp())\\n\",\n        \"onnx_model.start_sessions(providers=[\\\"CUDAExecutionProvider\\\"]) # GPU mode\"\n      ],\n      \"metadata\": {\n        \"id\": \"PsDS7ty79zZf\"\n      },\n      \"execution_count\": 6,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"onnx_model.visual_session.get_providers()\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"aZsGJNrbNCYe\",\n        \"outputId\": \"b0ee40a7-2ece-4e88-9e35-9ed0a735c533\"\n      },\n      \"execution_count\": 7,\n      \"outputs\": [\n        {\n          \"output_type\": \"execute_result\",\n          \"data\": {\n            \"text/plain\": [\n              \"['CUDAExecutionProvider', 'CPUExecutionProvider']\"\n            ]\n          },\n          \"metadata\": {},\n          \"execution_count\": 7\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## Benchmark\"\n      ],\n      \"metadata\": {\n        \"id\": \"J5IcOG_6jAFz\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"model, preprocess = clip.load(\\\"ViT-B/32\\\", device=\\\"cuda\\\", jit=False)\"\n      ],\n      \"metadata\": {\n        \"id\": \"SJ_5_x7vLepK\"\n      },\n      \"execution_count\": 8,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"model.eval()\\n\",\n        \"for x in model.parameters():\\n\",\n        \"    x.requires_grad = False\"\n      ],\n      \"metadata\": {\n        \"id\": \"OnOzZ3LMuubW\"\n      },\n      \"execution_count\": 9,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import numpy, random, torch\"\n      ],\n      \"metadata\": {\n        \"id\": \"wDwqRRrTGKUS\"\n      },\n      \"execution_count\": 10,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"def set_seed():\\n\",\n        \"    torch.manual_seed(12)\\n\",\n        \"    torch.cuda.manual_seed(12)\\n\",\n        \"    np.random.seed(12)\\n\",\n        \"    random.seed(12)\\n\",\n        \"\\n\",\n        \"    torch.backends.cudnn.deterministic=True\"\n      ],\n      \"metadata\": {\n        \"id\": \"9H17n_6gGJgT\"\n      },\n      \"execution_count\": 11,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import torch\\n\",\n        \"import time\\n\",\n        \"\\n\",\n        \"n = 5\\n\",\n        \"clip_results = {\\\"encode_image\\\": [],\\n\",\n        \"                \\\"encode_text\\\": []}\\n\",\n        \"onnx_results = 
{\\\"encode_image\\\": [],\\n\",\n        \"                \\\"encode_text\\\": []}\\n\",\n        \"for batch in [2, 8, 16, 32, 64]:\\n\",\n        \"    set_seed()\\n\",\n        \"    t_mean = []\\n\",\n        \"    for _ in range(n):\\n\",\n        \"        image_input = torch.randint(1, 255, (batch, 3, 224, 224))\\n\",\n        \"        image_input_onnx = image_input.detach().cpu().numpy().astype(np.float32)\\n\",\n        \"        t = time.time()\\n\",\n        \"        onnx_model.encode_image(image_input_onnx)\\n\",\n        \"        t_mean.append(time.time() - t)\\n\",\n        \"    print(\\\"onnx\\\", batch, \\\"encode_image\\\", round(sum(t_mean) / n, 3))\\n\",\n        \"    torch.cuda.empty_cache()\\n\",\n        \"    onnx_results[\\\"encode_image\\\"].append([batch, round(sum(t_mean) / n, 3)])\\n\",\n        \"\\n\",\n        \"    set_seed()\\n\",\n        \"    with torch.inference_mode():\\n\",\n        \"        t_mean = []\\n\",\n        \"        for _ in range(n):\\n\",\n        \"            image_input = torch.randint(1, 255, (batch, 3, 224, 224)).cuda()\\n\",\n        \"            t = time.time()\\n\",\n        \"            model.encode_image(image_input)\\n\",\n        \"            t_mean.append(time.time() - t)\\n\",\n        \"    print(\\\"torch\\\", batch, \\\"encode_image\\\", round(sum(t_mean) / n, 3))\\n\",\n        \"    torch.cuda.empty_cache()\\n\",\n        \"    clip_results[\\\"encode_image\\\"].append([batch, round(sum(t_mean) / n, 3)])\\n\",\n        \"\\n\",\n        \"    set_seed()\\n\",\n        \"    t_mean = []\\n\",\n        \"    for _ in range(n):\\n\",\n        \"        text_input = torch.randint(320, 49407, (batch, 77))\\n\",\n        \"        text_input_onnx = text_input.detach().cpu().numpy().astype(np.int32)\\n\",\n        \"        t = time.time()\\n\",\n        \"        onnx_model.encode_text(text_input_onnx)\\n\",\n        \"        t_mean.append(time.time() - t)\\n\",\n        \"    print(\\\"onnx\\\", batch, \\\"encode_text\\\", round(sum(t_mean) / n, 3))\\n\",\n        \"    torch.cuda.empty_cache()\\n\",\n        \"    onnx_results[\\\"encode_text\\\"].append([batch, round(sum(t_mean) / n, 3)])\\n\",\n        \"\\n\",\n        \"    set_seed()\\n\",\n        \"    with torch.inference_mode():\\n\",\n        \"        t_mean = []\\n\",\n        \"        for _ in range(n):\\n\",\n        \"            text_input = torch.randint(320, 49407, (batch, 77)).cuda()\\n\",\n        \"            t = time.time()\\n\",\n        \"            model.encode_text(text_input)\\n\",\n        \"            t_mean.append(time.time() - t)\\n\",\n        \"    print(\\\"torch\\\", batch, \\\"encode_text\\\", round(sum(t_mean) / n, 3))\\n\",\n        \"    torch.cuda.empty_cache()\\n\",\n        \"    clip_results[\\\"encode_text\\\"].append([batch, round(sum(t_mean) / n, 3)])\\n\",\n        \"\\n\",\n        \"    print(\\\"-\\\" * 78)\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"4lFL6tzWjiWL\",\n        \"outputId\": \"ccaa7e0a-96f3-4a51-c4bd-c442aa13763c\"\n      },\n      \"execution_count\": 12,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"onnx 2 encode_image 0.136\\n\",\n            \"torch 2 encode_image 0.02\\n\",\n            \"onnx 2 encode_text 0.021\\n\",\n            \"torch 2 encode_text 0.035\\n\",\n            
\"------------------------------------------------------------------------------\\n\",\n            \"onnx 8 encode_image 0.054\\n\",\n            \"torch 8 encode_image 0.081\\n\",\n            \"onnx 8 encode_text 0.04\\n\",\n            \"torch 8 encode_text 0.098\\n\",\n            \"------------------------------------------------------------------------------\\n\",\n            \"onnx 16 encode_image 0.089\\n\",\n            \"torch 16 encode_image 0.207\\n\",\n            \"onnx 16 encode_text 0.071\\n\",\n            \"torch 16 encode_text 0.196\\n\",\n            \"------------------------------------------------------------------------------\\n\",\n            \"onnx 32 encode_image 0.158\\n\",\n            \"torch 32 encode_image 0.44\\n\",\n            \"onnx 32 encode_text 0.134\\n\",\n            \"torch 32 encode_text 0.374\\n\",\n            \"------------------------------------------------------------------------------\\n\",\n            \"onnx 64 encode_image 0.325\\n\",\n            \"torch 64 encode_image 0.919\\n\",\n            \"onnx 64 encode_text 0.258\\n\",\n            \"torch 64 encode_text 0.719\\n\",\n            \"------------------------------------------------------------------------------\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import pandas as pd\"\n      ],\n      \"metadata\": {\n        \"id\": \"P2YhbE9v_4ci\"\n      },\n      \"execution_count\": 13,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"pd.DataFrame({\\\"backend\\\": [\\\"onnx\\\", \\\"torch\\\"] * 5,\\n\",\n        \"              \\\"batch\\\": [2, 2, 8, 8, 16, 16, 32, 32, 64, 64],\\n\",\n        \"              \\\"encode_image\\\": [j[1] for i in zip(onnx_results[\\\"encode_image\\\"],\\n\",\n        \"                                              clip_results[\\\"encode_image\\\"]) for j in i],\\n\",\n        \"              \\\"encode_text\\\": [j[1] for i in zip(onnx_results[\\\"encode_text\\\"],\\n\",\n        \"                                              clip_results[\\\"encode_text\\\"]) for j in i]})\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\",\n          \"height\": 362\n        },\n        \"id\": \"WfZfDk4PAlqm\",\n        \"outputId\": \"78a5cae8-68ee-4edd-f34d-ccf7d3d8a23b\"\n      },\n      \"execution_count\": 14,\n      \"outputs\": [\n        {\n          \"output_type\": \"execute_result\",\n          \"data\": {\n            \"text/plain\": [\n              \"  backend  batch  encode_image  encode_text\\n\",\n              \"0    onnx      2         0.136        0.021\\n\",\n              \"1   torch      2         0.020        0.035\\n\",\n              \"2    onnx      8         0.054        0.040\\n\",\n              \"3   torch      8         0.081        0.098\\n\",\n              \"4    onnx     16         0.089        0.071\\n\",\n              \"5   torch     16         0.207        0.196\\n\",\n              \"6    onnx     32         0.158        0.134\\n\",\n              \"7   torch     32         0.440        0.374\\n\",\n              \"8    onnx     64         0.325        0.258\\n\",\n              \"9   torch     64         0.919        0.719\"\n            ],\n            \"text/html\": [\n              \"\\n\",\n              \"  <div id=\\\"df-253653f6-3c54-446c-8c64-9345630eaf7b\\\">\\n\",\n              \"    <div 
class=\\\"colab-df-container\\\">\\n\",\n              \"      <div>\\n\",\n              \"<style scoped>\\n\",\n              \"    .dataframe tbody tr th:only-of-type {\\n\",\n              \"        vertical-align: middle;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .dataframe tbody tr th {\\n\",\n              \"        vertical-align: top;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .dataframe thead th {\\n\",\n              \"        text-align: right;\\n\",\n              \"    }\\n\",\n              \"</style>\\n\",\n              \"<table border=\\\"1\\\" class=\\\"dataframe\\\">\\n\",\n              \"  <thead>\\n\",\n              \"    <tr style=\\\"text-align: right;\\\">\\n\",\n              \"      <th></th>\\n\",\n              \"      <th>backend</th>\\n\",\n              \"      <th>batch</th>\\n\",\n              \"      <th>encode_image</th>\\n\",\n              \"      <th>encode_text</th>\\n\",\n              \"    </tr>\\n\",\n              \"  </thead>\\n\",\n              \"  <tbody>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>0</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>2</td>\\n\",\n              \"      <td>0.136</td>\\n\",\n              \"      <td>0.021</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>1</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>2</td>\\n\",\n              \"      <td>0.020</td>\\n\",\n              \"      <td>0.035</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>2</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>8</td>\\n\",\n              \"      <td>0.054</td>\\n\",\n              \"      <td>0.040</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>3</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>8</td>\\n\",\n              \"      <td>0.081</td>\\n\",\n              \"      <td>0.098</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>4</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>16</td>\\n\",\n              \"      <td>0.089</td>\\n\",\n              \"      <td>0.071</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>5</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>16</td>\\n\",\n              \"      <td>0.207</td>\\n\",\n              \"      <td>0.196</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>6</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>32</td>\\n\",\n              \"      <td>0.158</td>\\n\",\n              \"      <td>0.134</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>7</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>32</td>\\n\",\n              \"      <td>0.440</td>\\n\",\n              \"      <td>0.374</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>8</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>64</td>\\n\",\n              \"      <td>0.325</td>\\n\",\n              \"      
<td>0.258</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>9</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>64</td>\\n\",\n              \"      <td>0.919</td>\\n\",\n              \"      <td>0.719</td>\\n\",\n              \"    </tr>\\n\",\n              \"  </tbody>\\n\",\n              \"</table>\\n\",\n              \"</div>\\n\",\n              \"      <button class=\\\"colab-df-convert\\\" onclick=\\\"convertToInteractive('df-253653f6-3c54-446c-8c64-9345630eaf7b')\\\"\\n\",\n              \"              title=\\\"Convert this dataframe to an interactive table.\\\"\\n\",\n              \"              style=\\\"display:none;\\\">\\n\",\n              \"        \\n\",\n              \"  <svg xmlns=\\\"http://www.w3.org/2000/svg\\\" height=\\\"24px\\\"viewBox=\\\"0 0 24 24\\\"\\n\",\n              \"       width=\\\"24px\\\">\\n\",\n              \"    <path d=\\\"M0 0h24v24H0V0z\\\" fill=\\\"none\\\"/>\\n\",\n              \"    <path d=\\\"M18.56 5.44l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 2.06-2.06.94zm-11 1L8.5 8.5l.94-2.06 2.06-.94-2.06-.94L8.5 2.5l-.94 2.06-2.06.94zm10 10l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 2.06-2.06.94z\\\"/><path d=\\\"M17.41 7.96l-1.37-1.37c-.4-.4-.92-.59-1.43-.59-.52 0-1.04.2-1.43.59L10.3 9.45l-7.72 7.72c-.78.78-.78 2.05 0 2.83L4 21.41c.39.39.9.59 1.41.59.51 0 1.02-.2 1.41-.59l7.78-7.78 2.81-2.81c.8-.78.8-2.07 0-2.86zM5.41 20L4 18.59l7.72-7.72 1.47 1.35L5.41 20z\\\"/>\\n\",\n              \"  </svg>\\n\",\n              \"      </button>\\n\",\n              \"      \\n\",\n              \"  <style>\\n\",\n              \"    .colab-df-container {\\n\",\n              \"      display:flex;\\n\",\n              \"      flex-wrap:wrap;\\n\",\n              \"      gap: 12px;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .colab-df-convert {\\n\",\n              \"      background-color: #E8F0FE;\\n\",\n              \"      border: none;\\n\",\n              \"      border-radius: 50%;\\n\",\n              \"      cursor: pointer;\\n\",\n              \"      display: none;\\n\",\n              \"      fill: #1967D2;\\n\",\n              \"      height: 32px;\\n\",\n              \"      padding: 0 0 0 0;\\n\",\n              \"      width: 32px;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .colab-df-convert:hover {\\n\",\n              \"      background-color: #E2EBFA;\\n\",\n              \"      box-shadow: 0px 1px 2px rgba(60, 64, 67, 0.3), 0px 1px 3px 1px rgba(60, 64, 67, 0.15);\\n\",\n              \"      fill: #174EA6;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    [theme=dark] .colab-df-convert {\\n\",\n              \"      background-color: #3B4455;\\n\",\n              \"      fill: #D2E3FC;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    [theme=dark] .colab-df-convert:hover {\\n\",\n              \"      background-color: #434B5C;\\n\",\n              \"      box-shadow: 0px 1px 3px 1px rgba(0, 0, 0, 0.15);\\n\",\n              \"      filter: drop-shadow(0px 1px 2px rgba(0, 0, 0, 0.3));\\n\",\n              \"      fill: #FFFFFF;\\n\",\n              \"    }\\n\",\n              \"  </style>\\n\",\n              \"\\n\",\n              \"      <script>\\n\",\n              \"        const buttonEl =\\n\",\n              \"          document.querySelector('#df-253653f6-3c54-446c-8c64-9345630eaf7b 
button.colab-df-convert');\\n\",\n              \"        buttonEl.style.display =\\n\",\n              \"          google.colab.kernel.accessAllowed ? 'block' : 'none';\\n\",\n              \"\\n\",\n              \"        async function convertToInteractive(key) {\\n\",\n              \"          const element = document.querySelector('#df-253653f6-3c54-446c-8c64-9345630eaf7b');\\n\",\n              \"          const dataTable =\\n\",\n              \"            await google.colab.kernel.invokeFunction('convertToInteractive',\\n\",\n              \"                                                     [key], {});\\n\",\n              \"          if (!dataTable) return;\\n\",\n              \"\\n\",\n              \"          const docLinkHtml = 'Like what you see? Visit the ' +\\n\",\n              \"            '<a target=\\\"_blank\\\" href=https://colab.research.google.com/notebooks/data_table.ipynb>data table notebook</a>'\\n\",\n              \"            + ' to learn more about interactive tables.';\\n\",\n              \"          element.innerHTML = '';\\n\",\n              \"          dataTable['output_type'] = 'display_data';\\n\",\n              \"          await google.colab.output.renderOutput(dataTable, element);\\n\",\n              \"          const docLink = document.createElement('div');\\n\",\n              \"          docLink.innerHTML = docLinkHtml;\\n\",\n              \"          element.appendChild(docLink);\\n\",\n              \"        }\\n\",\n              \"      </script>\\n\",\n              \"    </div>\\n\",\n              \"  </div>\\n\",\n              \"  \"\n            ]\n          },\n          \"metadata\": {},\n          \"execution_count\": 14\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"onnx_df = pd.DataFrame({\\\"ONNX\\\": [\\\"ViT-B/32\\\"] * 5,\\n\",\n        \"              \\\"batch\\\": [2, 8, 16, 32, 64],\\n\",\n        \"              \\\"encode_image\\\": [i[1] for i in onnx_results[\\\"encode_image\\\"]],\\n\",\n        \"              \\\"encode_text\\\": [i[1] for i in onnx_results[\\\"encode_text\\\"]]})\\n\",\n        \"onnx_df[\\\"total\\\"] = onnx_df[\\\"encode_image\\\"] + onnx_df[\\\"encode_text\\\"]\"\n      ],\n      \"metadata\": {\n        \"id\": \"Xpw9lV7yBbA8\"\n      },\n      \"execution_count\": 15,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"onnx_df\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\",\n          \"height\": 206\n        },\n        \"id\": \"LItAyQkeDhnQ\",\n        \"outputId\": \"f9c1860c-e405-4d41-e530-d2b0027f1fd0\"\n      },\n      \"execution_count\": 16,\n      \"outputs\": [\n        {\n          \"output_type\": \"execute_result\",\n          \"data\": {\n            \"text/plain\": [\n              \"       ONNX  batch  encode_image  encode_text  total\\n\",\n              \"0  ViT-B/32      2         0.136        0.021  0.157\\n\",\n              \"1  ViT-B/32      8         0.054        0.040  0.094\\n\",\n              \"2  ViT-B/32     16         0.089        0.071  0.160\\n\",\n              \"3  ViT-B/32     32         0.158        0.134  0.292\\n\",\n              \"4  ViT-B/32     64         0.325        0.258  0.583\"\n            ],\n            \"text/html\": [\n              \"\\n\",\n              \"  <div id=\\\"df-fee38102-dd90-4015-a566-69309cf3ae5f\\\">\\n\",\n              \"    <div 
class=\\\"colab-df-container\\\">\\n\",\n              \"      <div>\\n\",\n              \"<style scoped>\\n\",\n              \"    .dataframe tbody tr th:only-of-type {\\n\",\n              \"        vertical-align: middle;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .dataframe tbody tr th {\\n\",\n              \"        vertical-align: top;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .dataframe thead th {\\n\",\n              \"        text-align: right;\\n\",\n              \"    }\\n\",\n              \"</style>\\n\",\n              \"<table border=\\\"1\\\" class=\\\"dataframe\\\">\\n\",\n              \"  <thead>\\n\",\n              \"    <tr style=\\\"text-align: right;\\\">\\n\",\n              \"      <th></th>\\n\",\n              \"      <th>ONNX</th>\\n\",\n              \"      <th>batch</th>\\n\",\n              \"      <th>encode_image</th>\\n\",\n              \"      <th>encode_text</th>\\n\",\n              \"      <th>total</th>\\n\",\n              \"    </tr>\\n\",\n              \"  </thead>\\n\",\n              \"  <tbody>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>0</th>\\n\",\n              \"      <td>ViT-B/32</td>\\n\",\n              \"      <td>2</td>\\n\",\n              \"      <td>0.136</td>\\n\",\n              \"      <td>0.021</td>\\n\",\n              \"      <td>0.157</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>1</th>\\n\",\n              \"      <td>ViT-B/32</td>\\n\",\n              \"      <td>8</td>\\n\",\n              \"      <td>0.054</td>\\n\",\n              \"      <td>0.040</td>\\n\",\n              \"      <td>0.094</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>2</th>\\n\",\n              \"      <td>ViT-B/32</td>\\n\",\n              \"      <td>16</td>\\n\",\n              \"      <td>0.089</td>\\n\",\n              \"      <td>0.071</td>\\n\",\n              \"      <td>0.160</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>3</th>\\n\",\n              \"      <td>ViT-B/32</td>\\n\",\n              \"      <td>32</td>\\n\",\n              \"      <td>0.158</td>\\n\",\n              \"      <td>0.134</td>\\n\",\n              \"      <td>0.292</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>4</th>\\n\",\n              \"      <td>ViT-B/32</td>\\n\",\n              \"      <td>64</td>\\n\",\n              \"      <td>0.325</td>\\n\",\n              \"      <td>0.258</td>\\n\",\n              \"      <td>0.583</td>\\n\",\n              \"    </tr>\\n\",\n              \"  </tbody>\\n\",\n              \"</table>\\n\",\n              \"</div>\\n\",\n              \"      <button class=\\\"colab-df-convert\\\" onclick=\\\"convertToInteractive('df-fee38102-dd90-4015-a566-69309cf3ae5f')\\\"\\n\",\n              \"              title=\\\"Convert this dataframe to an interactive table.\\\"\\n\",\n              \"              style=\\\"display:none;\\\">\\n\",\n              \"        \\n\",\n              \"  <svg xmlns=\\\"http://www.w3.org/2000/svg\\\" height=\\\"24px\\\"viewBox=\\\"0 0 24 24\\\"\\n\",\n              \"       width=\\\"24px\\\">\\n\",\n              \"    <path d=\\\"M0 0h24v24H0V0z\\\" fill=\\\"none\\\"/>\\n\",\n              \"    <path d=\\\"M18.56 5.44l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 
2.06-2.06.94zm-11 1L8.5 8.5l.94-2.06 2.06-.94-2.06-.94L8.5 2.5l-.94 2.06-2.06.94zm10 10l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 2.06-2.06.94z\\\"/><path d=\\\"M17.41 7.96l-1.37-1.37c-.4-.4-.92-.59-1.43-.59-.52 0-1.04.2-1.43.59L10.3 9.45l-7.72 7.72c-.78.78-.78 2.05 0 2.83L4 21.41c.39.39.9.59 1.41.59.51 0 1.02-.2 1.41-.59l7.78-7.78 2.81-2.81c.8-.78.8-2.07 0-2.86zM5.41 20L4 18.59l7.72-7.72 1.47 1.35L5.41 20z\\\"/>\\n\",\n              \"  </svg>\\n\",\n              \"      </button>\\n\",\n              \"      \\n\",\n              \"  <style>\\n\",\n              \"    .colab-df-container {\\n\",\n              \"      display:flex;\\n\",\n              \"      flex-wrap:wrap;\\n\",\n              \"      gap: 12px;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .colab-df-convert {\\n\",\n              \"      background-color: #E8F0FE;\\n\",\n              \"      border: none;\\n\",\n              \"      border-radius: 50%;\\n\",\n              \"      cursor: pointer;\\n\",\n              \"      display: none;\\n\",\n              \"      fill: #1967D2;\\n\",\n              \"      height: 32px;\\n\",\n              \"      padding: 0 0 0 0;\\n\",\n              \"      width: 32px;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .colab-df-convert:hover {\\n\",\n              \"      background-color: #E2EBFA;\\n\",\n              \"      box-shadow: 0px 1px 2px rgba(60, 64, 67, 0.3), 0px 1px 3px 1px rgba(60, 64, 67, 0.15);\\n\",\n              \"      fill: #174EA6;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    [theme=dark] .colab-df-convert {\\n\",\n              \"      background-color: #3B4455;\\n\",\n              \"      fill: #D2E3FC;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    [theme=dark] .colab-df-convert:hover {\\n\",\n              \"      background-color: #434B5C;\\n\",\n              \"      box-shadow: 0px 1px 3px 1px rgba(0, 0, 0, 0.15);\\n\",\n              \"      filter: drop-shadow(0px 1px 2px rgba(0, 0, 0, 0.3));\\n\",\n              \"      fill: #FFFFFF;\\n\",\n              \"    }\\n\",\n              \"  </style>\\n\",\n              \"\\n\",\n              \"      <script>\\n\",\n              \"        const buttonEl =\\n\",\n              \"          document.querySelector('#df-fee38102-dd90-4015-a566-69309cf3ae5f button.colab-df-convert');\\n\",\n              \"        buttonEl.style.display =\\n\",\n              \"          google.colab.kernel.accessAllowed ? 'block' : 'none';\\n\",\n              \"\\n\",\n              \"        async function convertToInteractive(key) {\\n\",\n              \"          const element = document.querySelector('#df-fee38102-dd90-4015-a566-69309cf3ae5f');\\n\",\n              \"          const dataTable =\\n\",\n              \"            await google.colab.kernel.invokeFunction('convertToInteractive',\\n\",\n              \"                                                     [key], {});\\n\",\n              \"          if (!dataTable) return;\\n\",\n              \"\\n\",\n              \"          const docLinkHtml = 'Like what you see? 
Visit the ' +\\n\",\n              \"            '<a target=\\\"_blank\\\" href=https://colab.research.google.com/notebooks/data_table.ipynb>data table notebook</a>'\\n\",\n              \"            + ' to learn more about interactive tables.';\\n\",\n              \"          element.innerHTML = '';\\n\",\n              \"          dataTable['output_type'] = 'display_data';\\n\",\n              \"          await google.colab.output.renderOutput(dataTable, element);\\n\",\n              \"          const docLink = document.createElement('div');\\n\",\n              \"          docLink.innerHTML = docLinkHtml;\\n\",\n              \"          element.appendChild(docLink);\\n\",\n              \"        }\\n\",\n              \"      </script>\\n\",\n              \"    </div>\\n\",\n              \"  </div>\\n\",\n              \"  \"\n            ]\n          },\n          \"metadata\": {},\n          \"execution_count\": 16\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"print(onnx_df.to_markdown(index=False))\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"AIQDA9FaJZ7Y\",\n        \"outputId\": \"36aa68bb-8ebb-47de-d2b4-b8ce36cacfd7\"\n      },\n      \"execution_count\": 17,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"| ONNX     |   batch |   encode_image |   encode_text |   total |\\n\",\n            \"|:---------|--------:|---------------:|--------------:|--------:|\\n\",\n            \"| ViT-B/32 |       2 |          0.136 |         0.021 |   0.157 |\\n\",\n            \"| ViT-B/32 |       8 |          0.054 |         0.04  |   0.094 |\\n\",\n            \"| ViT-B/32 |      16 |          0.089 |         0.071 |   0.16  |\\n\",\n            \"| ViT-B/32 |      32 |          0.158 |         0.134 |   0.292 |\\n\",\n            \"| ViT-B/32 |      64 |          0.325 |         0.258 |   0.583 |\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"clip_df = pd.DataFrame({\\\"TORCH\\\": [\\\"ViT-B/32\\\"] * 5,\\n\",\n        \"              \\\"batch\\\": [2, 8, 16, 32, 64],\\n\",\n        \"              \\\"encode_image\\\": [i[1] for i in clip_results[\\\"encode_image\\\"]],\\n\",\n        \"              \\\"encode_text\\\": [i[1] for i in clip_results[\\\"encode_text\\\"]]})\\n\",\n        \"clip_df[\\\"total\\\"] = clip_df[\\\"encode_image\\\"] + clip_df[\\\"encode_text\\\"]\"\n      ],\n      \"metadata\": {\n        \"id\": \"E1OXQUDvDZmI\"\n      },\n      \"execution_count\": 18,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"print(clip_df.to_markdown(index=False))\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"xAj-ynhCDpPO\",\n        \"outputId\": \"6f31dab3-8b2a-4b64-ed97-2ac309d6d749\"\n      },\n      \"execution_count\": 19,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"| TORCH    |   batch |   encode_image |   encode_text |   total |\\n\",\n            \"|:---------|--------:|---------------:|--------------:|--------:|\\n\",\n            \"| ViT-B/32 |       2 |          0.02  |         0.035 |   0.055 |\\n\",\n            
\"| ViT-B/32 |       8 |          0.081 |         0.098 |   0.179 |\\n\",\n            \"| ViT-B/32 |      16 |          0.207 |         0.196 |   0.403 |\\n\",\n            \"| ViT-B/32 |      32 |          0.44  |         0.374 |   0.814 |\\n\",\n            \"| ViT-B/32 |      64 |          0.919 |         0.719 |   1.638 |\\n\"\n          ]\n        }\n      ]\n    }\n  ]\n}"
  },
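  {
    "path": "examples/dev/benchmark_sketch.py",
    "content": "# NOTE: illustrative sketch only; this file is not part of the original repository.\n# It condenses the timing loop from the benchmark notebooks into a small helper.\n# All clip_onnx calls mirror the ones exercised in the notebooks.\nimport time\n\nimport clip\nimport numpy as np\nimport torch\n\nfrom clip_onnx import clip_onnx\n\n\ndef mean_seconds(fn, make_batch, n=5):\n    # Average wall-clock seconds of fn over n freshly generated batches,\n    # matching the n = 5 repetitions used in the notebooks.\n    total = 0.0\n    for _ in range(n):\n        batch = make_batch()\n        start = time.time()\n        fn(batch)\n        total += time.time() - start\n    return round(total / n, 3)\n\n\nif __name__ == \"__main__\":\n    # Export must happen on cpu, as in the notebooks.\n    model, _ = clip.load(\"ViT-B/32\", device=\"cpu\", jit=False)\n\n    # Dummy inputs are enough to trace the export.\n    dummy_image = torch.randn(1, 3, 224, 224)\n    dummy_text = clip.tokenize([\"a diagram\"])\n\n    onnx_model = clip_onnx(model, visual_path=\"clip_visual.onnx\",\n                           textual_path=\"clip_textual.onnx\")\n    onnx_model.convert2onnx(dummy_image, dummy_text, verbose=True)\n    onnx_model.start_sessions(providers=[\"CPUExecutionProvider\"])\n\n    for batch in [2, 8, 16, 32, 64]:\n        make_image = lambda: np.random.randint(1, 255, (batch, 3, 224, 224)).astype(np.float32)\n        make_text = lambda: np.random.randint(320, 49407, (batch, 77)).astype(np.int32)\n        print(batch, \"encode_image\", mean_seconds(onnx_model.encode_image, make_image))\n        print(batch, \"encode_text\", mean_seconds(onnx_model.encode_text, make_text))\n"
  },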
  {
    "path": "examples/dev/clip_onnx_benchmark_gpu_T4.ipynb",
    "content": "{\n  \"nbformat\": 4,\n  \"nbformat_minor\": 0,\n  \"metadata\": {\n    \"colab\": {\n      \"name\": \"clip-onnx-benchmark-gpu-T4.ipynb\",\n      \"provenance\": [],\n      \"authorship_tag\": \"ABX9TyNqeHpYdbkhiqZatysOn5ch\",\n      \"include_colab_link\": true\n    },\n    \"kernelspec\": {\n      \"name\": \"python3\",\n      \"display_name\": \"Python 3\"\n    },\n    \"language_info\": {\n      \"name\": \"python\"\n    },\n    \"accelerator\": \"GPU\"\n  },\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"view-in-github\",\n        \"colab_type\": \"text\"\n      },\n      \"source\": [\n        \"<a href=\\\"https://colab.research.google.com/github/Lednik7/CLIP-ONNX/blob/dev/examples/dev/clip_onnx_benchmark_gpu_T4.ipynb\\\" target=\\\"_parent\\\"><img src=\\\"https://colab.research.google.com/assets/colab-badge.svg\\\" alt=\\\"Open In Colab\\\"/></a>\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## Restart colab session after installation\\n\",\n        \"Reload the session if something doesn't work\"\n      ],\n      \"metadata\": {\n        \"id\": \"fxPg_VvZuScV\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": 1,\n      \"metadata\": {\n        \"id\": \"al_QNjyFq6Jj\"\n      },\n      \"outputs\": [],\n      \"source\": [\n        \"%%capture\\n\",\n        \"!pip install git+https://github.com/Lednik7/CLIP-ONNX.git@dev\\n\",\n        \"!pip install git+https://github.com/openai/CLIP.git\\n\",\n        \"!pip install onnxruntime-gpu\"\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"%%capture\\n\",\n        \"!wget -c -O CLIP.png https://github.com/openai/CLIP/blob/main/CLIP.png?raw=true\"\n      ],\n      \"metadata\": {\n        \"id\": \"42eeJz9lTdJ6\"\n      },\n      \"execution_count\": 1,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"!nvidia-smi\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"XuauIZIBSEUX\",\n        \"outputId\": \"3e459c2c-8f31-4aff-c288-f2e6c4684e36\"\n      },\n      \"execution_count\": 1,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"Tue May  3 07:10:09 2022       \\n\",\n            \"+-----------------------------------------------------------------------------+\\n\",\n            \"| NVIDIA-SMI 460.32.03    Driver Version: 460.32.03    CUDA Version: 11.2     |\\n\",\n            \"|-------------------------------+----------------------+----------------------+\\n\",\n            \"| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |\\n\",\n            \"| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |\\n\",\n            \"|                               |                      |               MIG M. 
|\\n\",\n            \"|===============================+======================+======================|\\n\",\n            \"|   0  Tesla T4            Off  | 00000000:00:04.0 Off |                    0 |\\n\",\n            \"| N/A   38C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |\\n\",\n            \"|                               |                      |                  N/A |\\n\",\n            \"+-------------------------------+----------------------+----------------------+\\n\",\n            \"                                                                               \\n\",\n            \"+-----------------------------------------------------------------------------+\\n\",\n            \"| Processes:                                                                  |\\n\",\n            \"|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |\\n\",\n            \"|        ID   ID                                                   Usage      |\\n\",\n            \"|=============================================================================|\\n\",\n            \"|  No running processes found                                                 |\\n\",\n            \"+-----------------------------------------------------------------------------+\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import onnxruntime\\n\",\n        \"print(onnxruntime.get_device())\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"gqvxpdajRX5_\",\n        \"outputId\": \"48a89abb-a326-4563-f99a-40c7d25145af\"\n      },\n      \"execution_count\": 2,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"GPU\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## GPU inference mode\\n\",\n        \"Select a runtime GPU to continue:\\n\",\n        \"\\n\",\n        \"Click Runtime -> Change Runtime Type -> switch \\\"Harware accelerator\\\" to be GPU. 
Save it, and you maybe connect to GPU\"\n      ],\n      \"metadata\": {\n        \"id\": \"010k-ksVTjAu\"\n      }\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"### Torch CLIP\"\n      ],\n      \"metadata\": {\n        \"id\": \"KdTz0IJWVBqE\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import clip\\n\",\n        \"from PIL import Image\\n\",\n        \"import numpy as np\\n\",\n        \"\\n\",\n        \"# onnx cannot work with cuda\\n\",\n        \"model, preprocess = clip.load(\\\"ViT-B/32\\\", device=\\\"cpu\\\", jit=False)\\n\",\n        \"\\n\",\n        \"# batch first\\n\",\n        \"image = preprocess(Image.open(\\\"CLIP.png\\\")).unsqueeze(0)  # [1, 3, 224, 224]\\n\",\n        \"image_onnx = image.detach().cpu().numpy().astype(np.float32)\\n\",\n        \"\\n\",\n        \"# batch first\\n\",\n        \"text = clip.tokenize([\\\"a diagram\\\", \\\"a dog\\\", \\\"a cat\\\"]) # [3, 77]\\n\",\n        \"text_onnx = text.detach().cpu().numpy().astype(np.int32)\"\n      ],\n      \"metadata\": {\n        \"id\": \"9ROPwKYurOhP\"\n      },\n      \"execution_count\": 3,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"### CLIP-ONNX\"\n      ],\n      \"metadata\": {\n        \"id\": \"Ao2MriaVVG6Y\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"from clip_onnx import clip_onnx\\n\",\n        \"\\n\",\n        \"onnx_model = clip_onnx(model)\\n\",\n        \"onnx_model.convert2onnx(image, text, verbose=True)\\n\",\n        \"# ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']\\n\",\n        \"onnx_model.start_sessions(providers=[\\\"CPUExecutionProvider\\\"]) # GPU mode\"\n      ],\n      \"metadata\": {\n        \"id\": \"nSeG9uAZrcph\",\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"outputId\": \"1186f909-6cfb-400b-c2d9-3dddc93d318b\"\n      },\n      \"execution_count\": 4,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"[CLIP ONNX] Start convert visual model\\n\",\n            \"[CLIP ONNX] Start check visual model\\n\",\n            \"[CLIP ONNX] Start convert textual model\\n\"\n          ]\n        },\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stderr\",\n          \"text\": [\n            \"/usr/local/lib/python3.7/dist-packages/torch/onnx/symbolic_opset9.py:2909: UserWarning: Exporting aten::index operator of advanced indexing in opset 12 is achieved by combination of multiple ONNX operators, including Reshape, Transpose, Concat, and Gather. 
If indices include negative values, the exported graph will produce incorrect results.\\n\",\n            \"  \\\"If indices include negative values, the exported graph will produce incorrect results.\\\")\\n\"\n          ]\n        },\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"[CLIP ONNX] Start check textual model\\n\",\n            \"[CLIP ONNX] Models converts successfully\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"onnx_model = clip_onnx(model)\\n\",\n        \"onnx_model.load_onnx(\\\"/content/clip_visual.onnx\\\",\\n\",\n        \"                     \\\"/content/clip_textual.onnx\\\",\\n\",\n        \"                     model.logit_scale.exp())\\n\",\n        \"onnx_model.start_sessions(providers=[\\\"CUDAExecutionProvider\\\"]) # GPU mode\"\n      ],\n      \"metadata\": {\n        \"id\": \"PsDS7ty79zZf\"\n      },\n      \"execution_count\": 5,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"onnx_model.visual_session.get_providers()\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"aZsGJNrbNCYe\",\n        \"outputId\": \"05464d1a-7047-4efd-80fe-32870cf34afd\"\n      },\n      \"execution_count\": 6,\n      \"outputs\": [\n        {\n          \"output_type\": \"execute_result\",\n          \"data\": {\n            \"text/plain\": [\n              \"['CUDAExecutionProvider', 'CPUExecutionProvider']\"\n            ]\n          },\n          \"metadata\": {},\n          \"execution_count\": 6\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## Benchmark\"\n      ],\n      \"metadata\": {\n        \"id\": \"J5IcOG_6jAFz\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"model, preprocess = clip.load(\\\"ViT-B/32\\\", device=\\\"cuda\\\", jit=False)\"\n      ],\n      \"metadata\": {\n        \"id\": \"SJ_5_x7vLepK\"\n      },\n      \"execution_count\": 7,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"model.eval()\\n\",\n        \"for x in model.parameters():\\n\",\n        \"    x.requires_grad = False\"\n      ],\n      \"metadata\": {\n        \"id\": \"OnOzZ3LMuubW\"\n      },\n      \"execution_count\": 8,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import numpy, random, torch\"\n      ],\n      \"metadata\": {\n        \"id\": \"wDwqRRrTGKUS\"\n      },\n      \"execution_count\": 9,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"def set_seed():\\n\",\n        \"    torch.manual_seed(12)\\n\",\n        \"    torch.cuda.manual_seed(12)\\n\",\n        \"    np.random.seed(12)\\n\",\n        \"    random.seed(12)\\n\",\n        \"\\n\",\n        \"    torch.backends.cudnn.deterministic=True\"\n      ],\n      \"metadata\": {\n        \"id\": \"9H17n_6gGJgT\"\n      },\n      \"execution_count\": 10,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import torch\\n\",\n        \"import time\\n\",\n        \"\\n\",\n        \"n = 5\\n\",\n        \"clip_results = {\\\"encode_image\\\": [],\\n\",\n        \"                \\\"encode_text\\\": []}\\n\",\n        \"onnx_results = 
{\\\"encode_image\\\": [],\\n\",\n        \"                \\\"encode_text\\\": []}\\n\",\n        \"for batch in [2, 8, 16, 32, 64]:\\n\",\n        \"    set_seed()\\n\",\n        \"    t_mean = []\\n\",\n        \"    for _ in range(n):\\n\",\n        \"        image_input = torch.randint(1, 255, (batch, 3, 224, 224))\\n\",\n        \"        image_input_onnx = image_input.detach().cpu().numpy().astype(np.float32)\\n\",\n        \"        t = time.time()\\n\",\n        \"        onnx_model.encode_image(image_input_onnx)\\n\",\n        \"        t_mean.append(time.time() - t)\\n\",\n        \"    print(\\\"onnx\\\", batch, \\\"encode_image\\\", round(sum(t_mean) / n, 3))\\n\",\n        \"    torch.cuda.empty_cache()\\n\",\n        \"    onnx_results[\\\"encode_image\\\"].append([batch, round(sum(t_mean) / n, 3)])\\n\",\n        \"\\n\",\n        \"    set_seed()\\n\",\n        \"    with torch.inference_mode():\\n\",\n        \"        t_mean = []\\n\",\n        \"        for _ in range(n):\\n\",\n        \"            image_input = torch.randint(1, 255, (batch, 3, 224, 224)).cuda()\\n\",\n        \"            t = time.time()\\n\",\n        \"            model.encode_image(image_input)\\n\",\n        \"            t_mean.append(time.time() - t)\\n\",\n        \"    print(\\\"torch\\\", batch, \\\"encode_image\\\", round(sum(t_mean) / n, 3))\\n\",\n        \"    torch.cuda.empty_cache()\\n\",\n        \"    clip_results[\\\"encode_image\\\"].append([batch, round(sum(t_mean) / n, 3)])\\n\",\n        \"\\n\",\n        \"    set_seed()\\n\",\n        \"    t_mean = []\\n\",\n        \"    for _ in range(n):\\n\",\n        \"        text_input = torch.randint(320, 49407, (batch, 77))\\n\",\n        \"        text_input_onnx = text_input.detach().cpu().numpy().astype(np.int32)\\n\",\n        \"        t = time.time()\\n\",\n        \"        onnx_model.encode_text(text_input_onnx)\\n\",\n        \"        t_mean.append(time.time() - t)\\n\",\n        \"    print(\\\"onnx\\\", batch, \\\"encode_text\\\", round(sum(t_mean) / n, 3))\\n\",\n        \"    torch.cuda.empty_cache()\\n\",\n        \"    onnx_results[\\\"encode_text\\\"].append([batch, round(sum(t_mean) / n, 3)])\\n\",\n        \"\\n\",\n        \"    set_seed()\\n\",\n        \"    with torch.inference_mode():\\n\",\n        \"        t_mean = []\\n\",\n        \"        for _ in range(n):\\n\",\n        \"            text_input = torch.randint(320, 49407, (batch, 77)).cuda()\\n\",\n        \"            t = time.time()\\n\",\n        \"            model.encode_text(text_input)\\n\",\n        \"            t_mean.append(time.time() - t)\\n\",\n        \"    print(\\\"torch\\\", batch, \\\"encode_text\\\", round(sum(t_mean) / n, 3))\\n\",\n        \"    torch.cuda.empty_cache()\\n\",\n        \"    clip_results[\\\"encode_text\\\"].append([batch, round(sum(t_mean) / n, 3)])\\n\",\n        \"\\n\",\n        \"    print(\\\"-\\\" * 78)\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"4lFL6tzWjiWL\",\n        \"outputId\": \"c2b9f0e4-9b93-408b-96bf-3fdb3057e15b\"\n      },\n      \"execution_count\": 11,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"onnx 2 encode_image 0.155\\n\",\n            \"torch 2 encode_image 0.017\\n\",\n            \"onnx 2 encode_text 0.01\\n\",\n            \"torch 2 encode_text 0.009\\n\",\n            
\"------------------------------------------------------------------------------\\n\",\n            \"onnx 8 encode_image 0.032\\n\",\n            \"torch 8 encode_image 0.008\\n\",\n            \"onnx 8 encode_text 0.014\\n\",\n            \"torch 8 encode_text 0.008\\n\",\n            \"------------------------------------------------------------------------------\\n\",\n            \"onnx 16 encode_image 0.037\\n\",\n            \"torch 16 encode_image 0.009\\n\",\n            \"onnx 16 encode_text 0.029\\n\",\n            \"torch 16 encode_text 0.012\\n\",\n            \"------------------------------------------------------------------------------\\n\",\n            \"onnx 32 encode_image 0.076\\n\",\n            \"torch 32 encode_image 0.008\\n\",\n            \"onnx 32 encode_text 0.059\\n\",\n            \"torch 32 encode_text 0.025\\n\",\n            \"------------------------------------------------------------------------------\\n\",\n            \"onnx 64 encode_image 0.169\\n\",\n            \"torch 64 encode_image 0.009\\n\",\n            \"onnx 64 encode_text 0.117\\n\",\n            \"torch 64 encode_text 0.049\\n\",\n            \"------------------------------------------------------------------------------\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import pandas as pd\"\n      ],\n      \"metadata\": {\n        \"id\": \"P2YhbE9v_4ci\"\n      },\n      \"execution_count\": 12,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"pd.DataFrame({\\\"backend\\\": [\\\"onnx\\\", \\\"torch\\\"] * 5,\\n\",\n        \"              \\\"batch\\\": [2, 2, 8, 8, 16, 16, 32, 32, 64, 64],\\n\",\n        \"              \\\"encode_image\\\": [j[1] for i in zip(onnx_results[\\\"encode_image\\\"],\\n\",\n        \"                                              clip_results[\\\"encode_image\\\"]) for j in i],\\n\",\n        \"              \\\"encode_text\\\": [j[1] for i in zip(onnx_results[\\\"encode_text\\\"],\\n\",\n        \"                                              clip_results[\\\"encode_text\\\"]) for j in i]})\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\",\n          \"height\": 362\n        },\n        \"id\": \"WfZfDk4PAlqm\",\n        \"outputId\": \"3375eac7-47b0-40ba-c2d6-c30fda2ab6d5\"\n      },\n      \"execution_count\": 13,\n      \"outputs\": [\n        {\n          \"output_type\": \"execute_result\",\n          \"data\": {\n            \"text/plain\": [\n              \"  backend  batch  encode_image  encode_text\\n\",\n              \"0    onnx      2         0.155        0.010\\n\",\n              \"1   torch      2         0.017        0.009\\n\",\n              \"2    onnx      8         0.032        0.014\\n\",\n              \"3   torch      8         0.008        0.008\\n\",\n              \"4    onnx     16         0.037        0.029\\n\",\n              \"5   torch     16         0.009        0.012\\n\",\n              \"6    onnx     32         0.076        0.059\\n\",\n              \"7   torch     32         0.008        0.025\\n\",\n              \"8    onnx     64         0.169        0.117\\n\",\n              \"9   torch     64         0.009        0.049\"\n            ],\n            \"text/html\": [\n              \"\\n\",\n              \"  <div id=\\\"df-7accc06f-b13e-47ae-837f-e962ce3f48e2\\\">\\n\",\n              \"    <div 
class=\\\"colab-df-container\\\">\\n\",\n              \"      <div>\\n\",\n              \"<style scoped>\\n\",\n              \"    .dataframe tbody tr th:only-of-type {\\n\",\n              \"        vertical-align: middle;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .dataframe tbody tr th {\\n\",\n              \"        vertical-align: top;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .dataframe thead th {\\n\",\n              \"        text-align: right;\\n\",\n              \"    }\\n\",\n              \"</style>\\n\",\n              \"<table border=\\\"1\\\" class=\\\"dataframe\\\">\\n\",\n              \"  <thead>\\n\",\n              \"    <tr style=\\\"text-align: right;\\\">\\n\",\n              \"      <th></th>\\n\",\n              \"      <th>backend</th>\\n\",\n              \"      <th>batch</th>\\n\",\n              \"      <th>encode_image</th>\\n\",\n              \"      <th>encode_text</th>\\n\",\n              \"    </tr>\\n\",\n              \"  </thead>\\n\",\n              \"  <tbody>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>0</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>2</td>\\n\",\n              \"      <td>0.155</td>\\n\",\n              \"      <td>0.010</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>1</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>2</td>\\n\",\n              \"      <td>0.017</td>\\n\",\n              \"      <td>0.009</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>2</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>8</td>\\n\",\n              \"      <td>0.032</td>\\n\",\n              \"      <td>0.014</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>3</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>8</td>\\n\",\n              \"      <td>0.008</td>\\n\",\n              \"      <td>0.008</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>4</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>16</td>\\n\",\n              \"      <td>0.037</td>\\n\",\n              \"      <td>0.029</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>5</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>16</td>\\n\",\n              \"      <td>0.009</td>\\n\",\n              \"      <td>0.012</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>6</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>32</td>\\n\",\n              \"      <td>0.076</td>\\n\",\n              \"      <td>0.059</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>7</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>32</td>\\n\",\n              \"      <td>0.008</td>\\n\",\n              \"      <td>0.025</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>8</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>64</td>\\n\",\n              \"      <td>0.169</td>\\n\",\n              \"      
<td>0.117</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>9</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>64</td>\\n\",\n              \"      <td>0.009</td>\\n\",\n              \"      <td>0.049</td>\\n\",\n              \"    </tr>\\n\",\n              \"  </tbody>\\n\",\n              \"</table>\\n\",\n              \"</div>\\n\",\n              \"      <button class=\\\"colab-df-convert\\\" onclick=\\\"convertToInteractive('df-7accc06f-b13e-47ae-837f-e962ce3f48e2')\\\"\\n\",\n              \"              title=\\\"Convert this dataframe to an interactive table.\\\"\\n\",\n              \"              style=\\\"display:none;\\\">\\n\",\n              \"        \\n\",\n              \"  <svg xmlns=\\\"http://www.w3.org/2000/svg\\\" height=\\\"24px\\\"viewBox=\\\"0 0 24 24\\\"\\n\",\n              \"       width=\\\"24px\\\">\\n\",\n              \"    <path d=\\\"M0 0h24v24H0V0z\\\" fill=\\\"none\\\"/>\\n\",\n              \"    <path d=\\\"M18.56 5.44l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 2.06-2.06.94zm-11 1L8.5 8.5l.94-2.06 2.06-.94-2.06-.94L8.5 2.5l-.94 2.06-2.06.94zm10 10l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 2.06-2.06.94z\\\"/><path d=\\\"M17.41 7.96l-1.37-1.37c-.4-.4-.92-.59-1.43-.59-.52 0-1.04.2-1.43.59L10.3 9.45l-7.72 7.72c-.78.78-.78 2.05 0 2.83L4 21.41c.39.39.9.59 1.41.59.51 0 1.02-.2 1.41-.59l7.78-7.78 2.81-2.81c.8-.78.8-2.07 0-2.86zM5.41 20L4 18.59l7.72-7.72 1.47 1.35L5.41 20z\\\"/>\\n\",\n              \"  </svg>\\n\",\n              \"      </button>\\n\",\n              \"      \\n\",\n              \"  <style>\\n\",\n              \"    .colab-df-container {\\n\",\n              \"      display:flex;\\n\",\n              \"      flex-wrap:wrap;\\n\",\n              \"      gap: 12px;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .colab-df-convert {\\n\",\n              \"      background-color: #E8F0FE;\\n\",\n              \"      border: none;\\n\",\n              \"      border-radius: 50%;\\n\",\n              \"      cursor: pointer;\\n\",\n              \"      display: none;\\n\",\n              \"      fill: #1967D2;\\n\",\n              \"      height: 32px;\\n\",\n              \"      padding: 0 0 0 0;\\n\",\n              \"      width: 32px;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .colab-df-convert:hover {\\n\",\n              \"      background-color: #E2EBFA;\\n\",\n              \"      box-shadow: 0px 1px 2px rgba(60, 64, 67, 0.3), 0px 1px 3px 1px rgba(60, 64, 67, 0.15);\\n\",\n              \"      fill: #174EA6;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    [theme=dark] .colab-df-convert {\\n\",\n              \"      background-color: #3B4455;\\n\",\n              \"      fill: #D2E3FC;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    [theme=dark] .colab-df-convert:hover {\\n\",\n              \"      background-color: #434B5C;\\n\",\n              \"      box-shadow: 0px 1px 3px 1px rgba(0, 0, 0, 0.15);\\n\",\n              \"      filter: drop-shadow(0px 1px 2px rgba(0, 0, 0, 0.3));\\n\",\n              \"      fill: #FFFFFF;\\n\",\n              \"    }\\n\",\n              \"  </style>\\n\",\n              \"\\n\",\n              \"      <script>\\n\",\n              \"        const buttonEl =\\n\",\n              \"          document.querySelector('#df-7accc06f-b13e-47ae-837f-e962ce3f48e2 
button.colab-df-convert');\\n\",\n              \"        buttonEl.style.display =\\n\",\n              \"          google.colab.kernel.accessAllowed ? 'block' : 'none';\\n\",\n              \"\\n\",\n              \"        async function convertToInteractive(key) {\\n\",\n              \"          const element = document.querySelector('#df-7accc06f-b13e-47ae-837f-e962ce3f48e2');\\n\",\n              \"          const dataTable =\\n\",\n              \"            await google.colab.kernel.invokeFunction('convertToInteractive',\\n\",\n              \"                                                     [key], {});\\n\",\n              \"          if (!dataTable) return;\\n\",\n              \"\\n\",\n              \"          const docLinkHtml = 'Like what you see? Visit the ' +\\n\",\n              \"            '<a target=\\\"_blank\\\" href=https://colab.research.google.com/notebooks/data_table.ipynb>data table notebook</a>'\\n\",\n              \"            + ' to learn more about interactive tables.';\\n\",\n              \"          element.innerHTML = '';\\n\",\n              \"          dataTable['output_type'] = 'display_data';\\n\",\n              \"          await google.colab.output.renderOutput(dataTable, element);\\n\",\n              \"          const docLink = document.createElement('div');\\n\",\n              \"          docLink.innerHTML = docLinkHtml;\\n\",\n              \"          element.appendChild(docLink);\\n\",\n              \"        }\\n\",\n              \"      </script>\\n\",\n              \"    </div>\\n\",\n              \"  </div>\\n\",\n              \"  \"\n            ]\n          },\n          \"metadata\": {},\n          \"execution_count\": 13\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"onnx_df = pd.DataFrame({\\\"ONNX\\\": [\\\"ViT-B/32\\\"] * 5,\\n\",\n        \"              \\\"batch\\\": [2, 8, 16, 32, 64],\\n\",\n        \"              \\\"encode_image\\\": [i[1] for i in onnx_results[\\\"encode_image\\\"]],\\n\",\n        \"              \\\"encode_text\\\": [i[1] for i in onnx_results[\\\"encode_text\\\"]]})\\n\",\n        \"onnx_df[\\\"total\\\"] = onnx_df[\\\"encode_image\\\"] + onnx_df[\\\"encode_text\\\"]\"\n      ],\n      \"metadata\": {\n        \"id\": \"Xpw9lV7yBbA8\"\n      },\n      \"execution_count\": 14,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"onnx_df\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\",\n          \"height\": 206\n        },\n        \"id\": \"LItAyQkeDhnQ\",\n        \"outputId\": \"e6c88747-5eba-4c16-be40-d4de584f429e\"\n      },\n      \"execution_count\": 15,\n      \"outputs\": [\n        {\n          \"output_type\": \"execute_result\",\n          \"data\": {\n            \"text/plain\": [\n              \"       ONNX  batch  encode_image  encode_text  total\\n\",\n              \"0  ViT-B/32      2         0.155        0.010  0.165\\n\",\n              \"1  ViT-B/32      8         0.032        0.014  0.046\\n\",\n              \"2  ViT-B/32     16         0.037        0.029  0.066\\n\",\n              \"3  ViT-B/32     32         0.076        0.059  0.135\\n\",\n              \"4  ViT-B/32     64         0.169        0.117  0.286\"\n            ],\n            \"text/html\": [\n              \"\\n\",\n              \"  <div id=\\\"df-ec0578ab-55c5-4651-beb1-5a0710dffb36\\\">\\n\",\n              \"    <div 
class=\\\"colab-df-container\\\">\\n\",\n              \"      <div>\\n\",\n              \"<style scoped>\\n\",\n              \"    .dataframe tbody tr th:only-of-type {\\n\",\n              \"        vertical-align: middle;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .dataframe tbody tr th {\\n\",\n              \"        vertical-align: top;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .dataframe thead th {\\n\",\n              \"        text-align: right;\\n\",\n              \"    }\\n\",\n              \"</style>\\n\",\n              \"<table border=\\\"1\\\" class=\\\"dataframe\\\">\\n\",\n              \"  <thead>\\n\",\n              \"    <tr style=\\\"text-align: right;\\\">\\n\",\n              \"      <th></th>\\n\",\n              \"      <th>ONNX</th>\\n\",\n              \"      <th>batch</th>\\n\",\n              \"      <th>encode_image</th>\\n\",\n              \"      <th>encode_text</th>\\n\",\n              \"      <th>total</th>\\n\",\n              \"    </tr>\\n\",\n              \"  </thead>\\n\",\n              \"  <tbody>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>0</th>\\n\",\n              \"      <td>ViT-B/32</td>\\n\",\n              \"      <td>2</td>\\n\",\n              \"      <td>0.155</td>\\n\",\n              \"      <td>0.010</td>\\n\",\n              \"      <td>0.165</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>1</th>\\n\",\n              \"      <td>ViT-B/32</td>\\n\",\n              \"      <td>8</td>\\n\",\n              \"      <td>0.032</td>\\n\",\n              \"      <td>0.014</td>\\n\",\n              \"      <td>0.046</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>2</th>\\n\",\n              \"      <td>ViT-B/32</td>\\n\",\n              \"      <td>16</td>\\n\",\n              \"      <td>0.037</td>\\n\",\n              \"      <td>0.029</td>\\n\",\n              \"      <td>0.066</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>3</th>\\n\",\n              \"      <td>ViT-B/32</td>\\n\",\n              \"      <td>32</td>\\n\",\n              \"      <td>0.076</td>\\n\",\n              \"      <td>0.059</td>\\n\",\n              \"      <td>0.135</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>4</th>\\n\",\n              \"      <td>ViT-B/32</td>\\n\",\n              \"      <td>64</td>\\n\",\n              \"      <td>0.169</td>\\n\",\n              \"      <td>0.117</td>\\n\",\n              \"      <td>0.286</td>\\n\",\n              \"    </tr>\\n\",\n              \"  </tbody>\\n\",\n              \"</table>\\n\",\n              \"</div>\\n\",\n              \"      <button class=\\\"colab-df-convert\\\" onclick=\\\"convertToInteractive('df-ec0578ab-55c5-4651-beb1-5a0710dffb36')\\\"\\n\",\n              \"              title=\\\"Convert this dataframe to an interactive table.\\\"\\n\",\n              \"              style=\\\"display:none;\\\">\\n\",\n              \"        \\n\",\n              \"  <svg xmlns=\\\"http://www.w3.org/2000/svg\\\" height=\\\"24px\\\"viewBox=\\\"0 0 24 24\\\"\\n\",\n              \"       width=\\\"24px\\\">\\n\",\n              \"    <path d=\\\"M0 0h24v24H0V0z\\\" fill=\\\"none\\\"/>\\n\",\n              \"    <path d=\\\"M18.56 5.44l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 
2.06-2.06.94zm-11 1L8.5 8.5l.94-2.06 2.06-.94-2.06-.94L8.5 2.5l-.94 2.06-2.06.94zm10 10l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 2.06-2.06.94z\\\"/><path d=\\\"M17.41 7.96l-1.37-1.37c-.4-.4-.92-.59-1.43-.59-.52 0-1.04.2-1.43.59L10.3 9.45l-7.72 7.72c-.78.78-.78 2.05 0 2.83L4 21.41c.39.39.9.59 1.41.59.51 0 1.02-.2 1.41-.59l7.78-7.78 2.81-2.81c.8-.78.8-2.07 0-2.86zM5.41 20L4 18.59l7.72-7.72 1.47 1.35L5.41 20z\\\"/>\\n\",\n              \"  </svg>\\n\",\n              \"      </button>\\n\",\n              \"      \\n\",\n              \"  <style>\\n\",\n              \"    .colab-df-container {\\n\",\n              \"      display:flex;\\n\",\n              \"      flex-wrap:wrap;\\n\",\n              \"      gap: 12px;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .colab-df-convert {\\n\",\n              \"      background-color: #E8F0FE;\\n\",\n              \"      border: none;\\n\",\n              \"      border-radius: 50%;\\n\",\n              \"      cursor: pointer;\\n\",\n              \"      display: none;\\n\",\n              \"      fill: #1967D2;\\n\",\n              \"      height: 32px;\\n\",\n              \"      padding: 0 0 0 0;\\n\",\n              \"      width: 32px;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .colab-df-convert:hover {\\n\",\n              \"      background-color: #E2EBFA;\\n\",\n              \"      box-shadow: 0px 1px 2px rgba(60, 64, 67, 0.3), 0px 1px 3px 1px rgba(60, 64, 67, 0.15);\\n\",\n              \"      fill: #174EA6;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    [theme=dark] .colab-df-convert {\\n\",\n              \"      background-color: #3B4455;\\n\",\n              \"      fill: #D2E3FC;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    [theme=dark] .colab-df-convert:hover {\\n\",\n              \"      background-color: #434B5C;\\n\",\n              \"      box-shadow: 0px 1px 3px 1px rgba(0, 0, 0, 0.15);\\n\",\n              \"      filter: drop-shadow(0px 1px 2px rgba(0, 0, 0, 0.3));\\n\",\n              \"      fill: #FFFFFF;\\n\",\n              \"    }\\n\",\n              \"  </style>\\n\",\n              \"\\n\",\n              \"      <script>\\n\",\n              \"        const buttonEl =\\n\",\n              \"          document.querySelector('#df-ec0578ab-55c5-4651-beb1-5a0710dffb36 button.colab-df-convert');\\n\",\n              \"        buttonEl.style.display =\\n\",\n              \"          google.colab.kernel.accessAllowed ? 'block' : 'none';\\n\",\n              \"\\n\",\n              \"        async function convertToInteractive(key) {\\n\",\n              \"          const element = document.querySelector('#df-ec0578ab-55c5-4651-beb1-5a0710dffb36');\\n\",\n              \"          const dataTable =\\n\",\n              \"            await google.colab.kernel.invokeFunction('convertToInteractive',\\n\",\n              \"                                                     [key], {});\\n\",\n              \"          if (!dataTable) return;\\n\",\n              \"\\n\",\n              \"          const docLinkHtml = 'Like what you see? 
Visit the ' +\\n\",\n              \"            '<a target=\\\"_blank\\\" href=https://colab.research.google.com/notebooks/data_table.ipynb>data table notebook</a>'\\n\",\n              \"            + ' to learn more about interactive tables.';\\n\",\n              \"          element.innerHTML = '';\\n\",\n              \"          dataTable['output_type'] = 'display_data';\\n\",\n              \"          await google.colab.output.renderOutput(dataTable, element);\\n\",\n              \"          const docLink = document.createElement('div');\\n\",\n              \"          docLink.innerHTML = docLinkHtml;\\n\",\n              \"          element.appendChild(docLink);\\n\",\n              \"        }\\n\",\n              \"      </script>\\n\",\n              \"    </div>\\n\",\n              \"  </div>\\n\",\n              \"  \"\n            ]\n          },\n          \"metadata\": {},\n          \"execution_count\": 15\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"print(onnx_df.to_markdown(index=False))\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"AIQDA9FaJZ7Y\",\n        \"outputId\": \"8b197c3c-63d1-42c4-8ca3-a3258acfc878\"\n      },\n      \"execution_count\": 16,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"| ONNX     |   batch |   encode_image |   encode_text |   total |\\n\",\n            \"|:---------|--------:|---------------:|--------------:|--------:|\\n\",\n            \"| ViT-B/32 |       2 |          0.155 |         0.01  |   0.165 |\\n\",\n            \"| ViT-B/32 |       8 |          0.032 |         0.014 |   0.046 |\\n\",\n            \"| ViT-B/32 |      16 |          0.037 |         0.029 |   0.066 |\\n\",\n            \"| ViT-B/32 |      32 |          0.076 |         0.059 |   0.135 |\\n\",\n            \"| ViT-B/32 |      64 |          0.169 |         0.117 |   0.286 |\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"clip_df = pd.DataFrame({\\\"TORCH\\\": [\\\"ViT-B/32\\\"] * 5,\\n\",\n        \"              \\\"batch\\\": [2, 8, 16, 32, 64],\\n\",\n        \"              \\\"encode_image\\\": [i[1] for i in clip_results[\\\"encode_image\\\"]],\\n\",\n        \"              \\\"encode_text\\\": [i[1] for i in clip_results[\\\"encode_text\\\"]]})\\n\",\n        \"clip_df[\\\"total\\\"] = clip_df[\\\"encode_image\\\"] + clip_df[\\\"encode_text\\\"]\"\n      ],\n      \"metadata\": {\n        \"id\": \"E1OXQUDvDZmI\"\n      },\n      \"execution_count\": 17,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"print(clip_df.to_markdown(index=False))\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"xAj-ynhCDpPO\",\n        \"outputId\": \"f90bc132-4727-45df-a6c2-49e2a68e0a4a\"\n      },\n      \"execution_count\": 18,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"| TORCH    |   batch |   encode_image |   encode_text |   total |\\n\",\n            \"|:---------|--------:|---------------:|--------------:|--------:|\\n\",\n            \"| ViT-B/32 |       2 |          0.017 |         0.009 |   0.026 |\\n\",\n            
\"| ViT-B/32 |       8 |          0.008 |         0.008 |   0.016 |\\n\",\n            \"| ViT-B/32 |      16 |          0.009 |         0.012 |   0.021 |\\n\",\n            \"| ViT-B/32 |      32 |          0.008 |         0.025 |   0.033 |\\n\",\n            \"| ViT-B/32 |      64 |          0.009 |         0.049 |   0.058 |\\n\"\n          ]\n        }\n      ]\n    }\n  ]\n}"
  },
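  {
    "path": "examples/dev/load_onnx_gpu_sketch.py",
    "content": "# NOTE: illustrative sketch only; this file is not part of the original repository.\n# It isolates the reload path from the T4 notebook: instead of re-exporting,\n# load previously converted .onnx files and start a CUDA session.\nimport clip\nimport numpy as np\nimport torch\n\nfrom clip_onnx import clip_onnx\n\nmodel, _ = clip.load(\"ViT-B/32\", device=\"cpu\", jit=False)\n\nonnx_model = clip_onnx(model)\n# The paths assume an earlier convert2onnx run wrote these files.\nonnx_model.load_onnx(\"/content/clip_visual.onnx\",\n                     \"/content/clip_textual.onnx\",\n                     model.logit_scale.exp())\nonnx_model.start_sessions(providers=[\"CUDAExecutionProvider\"])  # GPU mode\n\n# CUDA should come first in the active provider list.\nprint(onnx_model.visual_session.get_providers())\n\nimage = torch.randint(1, 255, (8, 3, 224, 224)).numpy().astype(np.float32)\nfeatures = onnx_model.encode_image(image)\nprint(type(features))\n\n# Caveat for reading the notebook timings: CUDA kernels launch\n# asynchronously, so torch timings taken without torch.cuda.synchronize()\n# largely measure launch overhead rather than full compute time.\n"
  },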
  {
    "path": "examples/readme_example.ipynb",
    "content": "{\n  \"nbformat\": 4,\n  \"nbformat_minor\": 0,\n  \"metadata\": {\n    \"colab\": {\n      \"name\": \"readme_example.ipynb\",\n      \"provenance\": [],\n      \"authorship_tag\": \"ABX9TyPpME0Qdi/m3VZQ+jNj39dT\",\n      \"include_colab_link\": true\n    },\n    \"kernelspec\": {\n      \"name\": \"python3\",\n      \"display_name\": \"Python 3\"\n    },\n    \"language_info\": {\n      \"name\": \"python\"\n    }\n  },\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"view-in-github\",\n        \"colab_type\": \"text\"\n      },\n      \"source\": [\n        \"<a href=\\\"https://colab.research.google.com/github/Lednik7/CLIP-ONNX/blob/main/examples/readme_example.ipynb\\\" target=\\\"_parent\\\"><img src=\\\"https://colab.research.google.com/assets/colab-badge.svg\\\" alt=\\\"Open In Colab\\\"/></a>\"\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## Restart colab session after installation\\n\",\n        \"Reload the session if something doesn't work\"\n      ],\n      \"metadata\": {\n        \"id\": \"whlsBiJgR8le\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"%%capture\\n\",\n        \"!pip install git+https://github.com/Lednik7/CLIP-ONNX.git\\n\",\n        \"!pip install git+https://github.com/openai/CLIP.git\\n\",\n        \"!pip install onnxruntime-gpu\"\n      ],\n      \"metadata\": {\n        \"id\": \"HnbpAkvuR73L\"\n      },\n      \"execution_count\": 1,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"%%capture\\n\",\n        \"!wget -c -O CLIP.png https://github.com/openai/CLIP/blob/main/CLIP.png?raw=true\"\n      ],\n      \"metadata\": {\n        \"id\": \"tqy0zKM4R-7M\"\n      },\n      \"execution_count\": 2,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"!nvidia-smi # CPU Provider\"\n      ],\n      \"metadata\": {\n        \"id\": \"eKqETHL4YscZ\",\n        \"outputId\": \"7ff0bc18-fb40-4296-ab05-b079043e46a1\",\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        }\n      },\n      \"execution_count\": 3,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. 
Make sure that the latest NVIDIA driver is installed and running.\\n\",\n            \"\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import onnxruntime\\n\",\n        \"\\n\",\n        \"print(onnxruntime.get_device()) # priority device\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"x8IN72OnSAIh\",\n        \"outputId\": \"81d14047-91fa-4a5c-a1e3-f5b550556591\"\n      },\n      \"execution_count\": 4,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"CPU\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## CPU inference mode\"\n      ],\n      \"metadata\": {\n        \"id\": \"U1Pr-YTtSEhs\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import warnings\\n\",\n        \"\\n\",\n        \"warnings.filterwarnings(\\\"ignore\\\", category=UserWarning)\"\n      ],\n      \"metadata\": {\n        \"id\": \"gZTxanR26knr\"\n      },\n      \"execution_count\": 5,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import clip\\n\",\n        \"from PIL import Image\\n\",\n        \"import numpy as np\\n\",\n        \"\\n\",\n        \"# onnx cannot export with cuda\\n\",\n        \"model, preprocess = clip.load(\\\"ViT-B/32\\\", device=\\\"cpu\\\", jit=False)\\n\",\n        \"\\n\",\n        \"# batch first\\n\",\n        \"image = preprocess(Image.open(\\\"CLIP.png\\\")).unsqueeze(0).cpu() # [1, 3, 224, 224]\\n\",\n        \"image_onnx = image.detach().cpu().numpy().astype(np.float32)\\n\",\n        \"\\n\",\n        \"# batch first\\n\",\n        \"text = clip.tokenize([\\\"a diagram\\\", \\\"a dog\\\", \\\"a cat\\\"]).cpu() # [3, 77]\\n\",\n        \"text_onnx = text.detach().cpu().numpy().astype(np.int32)\"\n      ],\n      \"metadata\": {\n        \"id\": \"rPwc6A2SSGyl\"\n      },\n      \"execution_count\": 6,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"from clip_onnx import clip_onnx, attention\\n\",\n        \"# clip.model.ResidualAttentionBlock.attention = attention\\n\",\n        \"\\n\",\n        \"visual_path = \\\"clip_visual.onnx\\\"\\n\",\n        \"textual_path = \\\"clip_textual.onnx\\\"\\n\",\n        \"\\n\",\n        \"onnx_model = clip_onnx(model, visual_path=visual_path, textual_path=textual_path)\\n\",\n        \"onnx_model.convert2onnx(image, text, verbose=True)\\n\",\n        \"# ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']\\n\",\n        \"onnx_model.start_sessions(providers=[\\\"CPUExecutionProvider\\\"]) # cpu mode\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"oYM5FDSGSJBW\",\n        \"outputId\": \"816705b1-3829-4424-c7c4-5426cf21cc18\"\n      },\n      \"execution_count\": 7,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"[CLIP ONNX] Start convert visual model\\n\",\n            \"[CLIP ONNX] Start check visual model\\n\",\n            \"[CLIP ONNX] Start convert textual model\\n\",\n            \"[CLIP ONNX] Start check textual model\\n\",\n            \"[CLIP ONNX] 
Models converts successfully\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"image_features = onnx_model.encode_image(image_onnx)\\n\",\n        \"text_features = onnx_model.encode_text(text_onnx)\\n\",\n        \"\\n\",\n        \"logits_per_image, logits_per_text = onnx_model(image_onnx, text_onnx)\\n\",\n        \"probs = logits_per_image.softmax(dim=-1).detach().cpu().numpy()\\n\",\n        \"\\n\",\n        \"print(\\\"Label probs:\\\", probs)  # prints: [[0.9927937  0.00421067 0.00299571]]\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"tYVuk72nSLw6\",\n        \"outputId\": \"41608059-3732-4ea7-c619-66f803af4185\"\n      },\n      \"execution_count\": 8,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"Label probs: [[0.9927937  0.00421067 0.00299571]]\\n\"\n          ]\n        }\n      ]\n    }\n  ]\n}\n"
  },
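  {
    "path": "examples/onnx_vs_torch_check_sketch.py",
    "content": "# NOTE: illustrative sketch only; this file is not part of the original repository.\n# It complements the speed benchmarks with a correctness check: the exported\n# ONNX encoders should closely reproduce the torch encoders on the same input.\n# Assumes encode_image returns an array-like value, as used in the notebooks.\nimport clip\nimport numpy as np\nimport torch\n\nfrom clip_onnx import clip_onnx\n\nmodel, preprocess = clip.load(\"ViT-B/32\", device=\"cpu\", jit=False)\n\nimage = torch.randn(2, 3, 224, 224)\ntext = clip.tokenize([\"a diagram\", \"a dog\"])\n\nonnx_model = clip_onnx(model, visual_path=\"clip_visual.onnx\",\n                       textual_path=\"clip_textual.onnx\")\nonnx_model.convert2onnx(image, text, verbose=True)\nonnx_model.start_sessions(providers=[\"CPUExecutionProvider\"])\n\nwith torch.inference_mode():\n    torch_features = model.encode_image(image).cpu().numpy()\nonnx_features = np.asarray(onnx_model.encode_image(image.numpy().astype(np.float32)))\n\n# A float32 export should agree with torch up to small numerical noise.\nprint(\"max abs diff:\", np.abs(torch_features - onnx_features).max())\n"
  },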
  {
    "path": "examples/ru_CLIP_tiny_onnx.ipynb",
    "content": "{\n  \"nbformat\": 4,\n  \"nbformat_minor\": 0,\n  \"metadata\": {\n    \"colab\": {\n      \"name\": \"ru_CLIP_tiny_onnx.ipynb\",\n      \"provenance\": [],\n      \"collapsed_sections\": [\n        \"WWXCt_2NLhN_\",\n        \"PHb4CAoRL3qC\",\n        \"re2sSYAYO3D-\",\n        \"ithu4-z0PIm5\",\n        \"FWm0GAhWPzSW\"\n      ],\n      \"machine_shape\": \"hm\"\n    },\n    \"kernelspec\": {\n      \"name\": \"python3\",\n      \"display_name\": \"Python 3\"\n    },\n    \"language_info\": {\n      \"name\": \"python\"\n    },\n    \"accelerator\": \"GPU\",\n    \"widgets\": {\n      \"application/vnd.jupyter.widget-state+json\": {\n        \"5319c7971f234d4bb615508f76475f9e\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"HBoxModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"HBoxView\",\n            \"_dom_classes\": [],\n            \"_model_name\": \"HBoxModel\",\n            \"_view_module\": \"@jupyter-widgets/controls\",\n            \"_model_module_version\": \"1.5.0\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.5.0\",\n            \"box_style\": \"\",\n            \"layout\": \"IPY_MODEL_c43027a0735e459ca1f710e5a9c43177\",\n            \"_model_module\": \"@jupyter-widgets/controls\",\n            \"children\": [\n              \"IPY_MODEL_c00c959249db4f2a9b97adabd2684c3c\",\n              \"IPY_MODEL_d9e4edd05c1e40f991eb2c2f1fc9ebc1\",\n              \"IPY_MODEL_ab4928c0a86449a384e36d8c0bc25717\"\n            ]\n          }\n        },\n        \"c43027a0735e459ca1f710e5a9c43177\": {\n          \"model_module\": \"@jupyter-widgets/base\",\n          \"model_name\": \"LayoutModel\",\n          \"model_module_version\": \"1.2.0\",\n          \"state\": {\n            \"_view_name\": \"LayoutView\",\n            \"grid_template_rows\": null,\n            \"right\": null,\n            \"justify_content\": null,\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"overflow\": null,\n            \"_model_module_version\": \"1.2.0\",\n            \"_view_count\": null,\n            \"flex_flow\": null,\n            \"width\": null,\n            \"min_width\": null,\n            \"border\": null,\n            \"align_items\": null,\n            \"bottom\": null,\n            \"_model_module\": \"@jupyter-widgets/base\",\n            \"top\": null,\n            \"grid_column\": null,\n            \"overflow_y\": null,\n            \"overflow_x\": null,\n            \"grid_auto_flow\": null,\n            \"grid_area\": null,\n            \"grid_template_columns\": null,\n            \"flex\": null,\n            \"_model_name\": \"LayoutModel\",\n            \"justify_items\": null,\n            \"grid_row\": null,\n            \"max_height\": null,\n            \"align_content\": null,\n            \"visibility\": null,\n            \"align_self\": null,\n            \"height\": null,\n            \"min_height\": null,\n            \"padding\": null,\n            \"grid_auto_rows\": null,\n            \"grid_gap\": null,\n            \"max_width\": null,\n            \"order\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"grid_template_areas\": null,\n            \"object_position\": null,\n            \"object_fit\": null,\n            \"grid_auto_columns\": null,\n            \"margin\": null,\n            \"display\": null,\n            \"left\": null\n          }\n        },\n      
  \"c00c959249db4f2a9b97adabd2684c3c\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"HTMLModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"HTMLView\",\n            \"style\": \"IPY_MODEL_3251223dac8f43c081701ff7f663cb35\",\n            \"_dom_classes\": [],\n            \"description\": \"\",\n            \"_model_name\": \"HTMLModel\",\n            \"placeholder\": \"​\",\n            \"_view_module\": \"@jupyter-widgets/controls\",\n            \"_model_module_version\": \"1.5.0\",\n            \"value\": \"Downloading: 100%\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.5.0\",\n            \"description_tooltip\": null,\n            \"_model_module\": \"@jupyter-widgets/controls\",\n            \"layout\": \"IPY_MODEL_d63d5559ce534b86969132d3ff8d875b\"\n          }\n        },\n        \"d9e4edd05c1e40f991eb2c2f1fc9ebc1\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"FloatProgressModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"ProgressView\",\n            \"style\": \"IPY_MODEL_59618f021fc4495e9c401a421d28d4a0\",\n            \"_dom_classes\": [],\n            \"description\": \"\",\n            \"_model_name\": \"FloatProgressModel\",\n            \"bar_style\": \"success\",\n            \"max\": 381781,\n            \"_view_module\": \"@jupyter-widgets/controls\",\n            \"_model_module_version\": \"1.5.0\",\n            \"value\": 381781,\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.5.0\",\n            \"orientation\": \"horizontal\",\n            \"min\": 0,\n            \"description_tooltip\": null,\n            \"_model_module\": \"@jupyter-widgets/controls\",\n            \"layout\": \"IPY_MODEL_c56ba935682647dca4bdcc593fe0d2cc\"\n          }\n        },\n        \"ab4928c0a86449a384e36d8c0bc25717\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"HTMLModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"HTMLView\",\n            \"style\": \"IPY_MODEL_01808c7fec8447368d60a33b2d683851\",\n            \"_dom_classes\": [],\n            \"description\": \"\",\n            \"_model_name\": \"HTMLModel\",\n            \"placeholder\": \"​\",\n            \"_view_module\": \"@jupyter-widgets/controls\",\n            \"_model_module_version\": \"1.5.0\",\n            \"value\": \" 373k/373k [00:00&lt;00:00, 876kB/s]\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.5.0\",\n            \"description_tooltip\": null,\n            \"_model_module\": \"@jupyter-widgets/controls\",\n            \"layout\": \"IPY_MODEL_f9466e0349c84633a0fb8ceeffa2a984\"\n          }\n        },\n        \"3251223dac8f43c081701ff7f663cb35\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"DescriptionStyleModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"StyleView\",\n            \"_model_name\": \"DescriptionStyleModel\",\n            \"description_width\": \"\",\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"_model_module_version\": \"1.5.0\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"_model_module\": 
\"@jupyter-widgets/controls\"\n          }\n        },\n        \"d63d5559ce534b86969132d3ff8d875b\": {\n          \"model_module\": \"@jupyter-widgets/base\",\n          \"model_name\": \"LayoutModel\",\n          \"model_module_version\": \"1.2.0\",\n          \"state\": {\n            \"_view_name\": \"LayoutView\",\n            \"grid_template_rows\": null,\n            \"right\": null,\n            \"justify_content\": null,\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"overflow\": null,\n            \"_model_module_version\": \"1.2.0\",\n            \"_view_count\": null,\n            \"flex_flow\": null,\n            \"width\": null,\n            \"min_width\": null,\n            \"border\": null,\n            \"align_items\": null,\n            \"bottom\": null,\n            \"_model_module\": \"@jupyter-widgets/base\",\n            \"top\": null,\n            \"grid_column\": null,\n            \"overflow_y\": null,\n            \"overflow_x\": null,\n            \"grid_auto_flow\": null,\n            \"grid_area\": null,\n            \"grid_template_columns\": null,\n            \"flex\": null,\n            \"_model_name\": \"LayoutModel\",\n            \"justify_items\": null,\n            \"grid_row\": null,\n            \"max_height\": null,\n            \"align_content\": null,\n            \"visibility\": null,\n            \"align_self\": null,\n            \"height\": null,\n            \"min_height\": null,\n            \"padding\": null,\n            \"grid_auto_rows\": null,\n            \"grid_gap\": null,\n            \"max_width\": null,\n            \"order\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"grid_template_areas\": null,\n            \"object_position\": null,\n            \"object_fit\": null,\n            \"grid_auto_columns\": null,\n            \"margin\": null,\n            \"display\": null,\n            \"left\": null\n          }\n        },\n        \"59618f021fc4495e9c401a421d28d4a0\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"ProgressStyleModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"StyleView\",\n            \"_model_name\": \"ProgressStyleModel\",\n            \"description_width\": \"\",\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"_model_module_version\": \"1.5.0\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"bar_color\": null,\n            \"_model_module\": \"@jupyter-widgets/controls\"\n          }\n        },\n        \"c56ba935682647dca4bdcc593fe0d2cc\": {\n          \"model_module\": \"@jupyter-widgets/base\",\n          \"model_name\": \"LayoutModel\",\n          \"model_module_version\": \"1.2.0\",\n          \"state\": {\n            \"_view_name\": \"LayoutView\",\n            \"grid_template_rows\": null,\n            \"right\": null,\n            \"justify_content\": null,\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"overflow\": null,\n            \"_model_module_version\": \"1.2.0\",\n            \"_view_count\": null,\n            \"flex_flow\": null,\n            \"width\": null,\n            \"min_width\": null,\n            \"border\": null,\n            \"align_items\": null,\n            \"bottom\": null,\n            \"_model_module\": \"@jupyter-widgets/base\",\n            \"top\": null,\n            \"grid_column\": null,\n            
\"overflow_y\": null,\n            \"overflow_x\": null,\n            \"grid_auto_flow\": null,\n            \"grid_area\": null,\n            \"grid_template_columns\": null,\n            \"flex\": null,\n            \"_model_name\": \"LayoutModel\",\n            \"justify_items\": null,\n            \"grid_row\": null,\n            \"max_height\": null,\n            \"align_content\": null,\n            \"visibility\": null,\n            \"align_self\": null,\n            \"height\": null,\n            \"min_height\": null,\n            \"padding\": null,\n            \"grid_auto_rows\": null,\n            \"grid_gap\": null,\n            \"max_width\": null,\n            \"order\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"grid_template_areas\": null,\n            \"object_position\": null,\n            \"object_fit\": null,\n            \"grid_auto_columns\": null,\n            \"margin\": null,\n            \"display\": null,\n            \"left\": null\n          }\n        },\n        \"01808c7fec8447368d60a33b2d683851\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"DescriptionStyleModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"StyleView\",\n            \"_model_name\": \"DescriptionStyleModel\",\n            \"description_width\": \"\",\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"_model_module_version\": \"1.5.0\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"_model_module\": \"@jupyter-widgets/controls\"\n          }\n        },\n        \"f9466e0349c84633a0fb8ceeffa2a984\": {\n          \"model_module\": \"@jupyter-widgets/base\",\n          \"model_name\": \"LayoutModel\",\n          \"model_module_version\": \"1.2.0\",\n          \"state\": {\n            \"_view_name\": \"LayoutView\",\n            \"grid_template_rows\": null,\n            \"right\": null,\n            \"justify_content\": null,\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"overflow\": null,\n            \"_model_module_version\": \"1.2.0\",\n            \"_view_count\": null,\n            \"flex_flow\": null,\n            \"width\": null,\n            \"min_width\": null,\n            \"border\": null,\n            \"align_items\": null,\n            \"bottom\": null,\n            \"_model_module\": \"@jupyter-widgets/base\",\n            \"top\": null,\n            \"grid_column\": null,\n            \"overflow_y\": null,\n            \"overflow_x\": null,\n            \"grid_auto_flow\": null,\n            \"grid_area\": null,\n            \"grid_template_columns\": null,\n            \"flex\": null,\n            \"_model_name\": \"LayoutModel\",\n            \"justify_items\": null,\n            \"grid_row\": null,\n            \"max_height\": null,\n            \"align_content\": null,\n            \"visibility\": null,\n            \"align_self\": null,\n            \"height\": null,\n            \"min_height\": null,\n            \"padding\": null,\n            \"grid_auto_rows\": null,\n            \"grid_gap\": null,\n            \"max_width\": null,\n            \"order\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"grid_template_areas\": null,\n            \"object_position\": null,\n            \"object_fit\": null,\n            \"grid_auto_columns\": null,\n            \"margin\": null,\n            \"display\": null,\n    
        \"left\": null\n          }\n        },\n        \"10ee9777b41e42129e2c9cc9327ad88f\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"HBoxModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"HBoxView\",\n            \"_dom_classes\": [],\n            \"_model_name\": \"HBoxModel\",\n            \"_view_module\": \"@jupyter-widgets/controls\",\n            \"_model_module_version\": \"1.5.0\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.5.0\",\n            \"box_style\": \"\",\n            \"layout\": \"IPY_MODEL_cb6a647757244da3941602127ec38ccb\",\n            \"_model_module\": \"@jupyter-widgets/controls\",\n            \"children\": [\n              \"IPY_MODEL_a64a223312144f2f9736729b63ab1ce5\",\n              \"IPY_MODEL_7e7bce13eeed41179e4e15fc7afc89d5\",\n              \"IPY_MODEL_7bacd13c23cf415fa5d58e9243c4a785\"\n            ]\n          }\n        },\n        \"cb6a647757244da3941602127ec38ccb\": {\n          \"model_module\": \"@jupyter-widgets/base\",\n          \"model_name\": \"LayoutModel\",\n          \"model_module_version\": \"1.2.0\",\n          \"state\": {\n            \"_view_name\": \"LayoutView\",\n            \"grid_template_rows\": null,\n            \"right\": null,\n            \"justify_content\": null,\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"overflow\": null,\n            \"_model_module_version\": \"1.2.0\",\n            \"_view_count\": null,\n            \"flex_flow\": null,\n            \"width\": null,\n            \"min_width\": null,\n            \"border\": null,\n            \"align_items\": null,\n            \"bottom\": null,\n            \"_model_module\": \"@jupyter-widgets/base\",\n            \"top\": null,\n            \"grid_column\": null,\n            \"overflow_y\": null,\n            \"overflow_x\": null,\n            \"grid_auto_flow\": null,\n            \"grid_area\": null,\n            \"grid_template_columns\": null,\n            \"flex\": null,\n            \"_model_name\": \"LayoutModel\",\n            \"justify_items\": null,\n            \"grid_row\": null,\n            \"max_height\": null,\n            \"align_content\": null,\n            \"visibility\": null,\n            \"align_self\": null,\n            \"height\": null,\n            \"min_height\": null,\n            \"padding\": null,\n            \"grid_auto_rows\": null,\n            \"grid_gap\": null,\n            \"max_width\": null,\n            \"order\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"grid_template_areas\": null,\n            \"object_position\": null,\n            \"object_fit\": null,\n            \"grid_auto_columns\": null,\n            \"margin\": null,\n            \"display\": null,\n            \"left\": null\n          }\n        },\n        \"a64a223312144f2f9736729b63ab1ce5\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"HTMLModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"HTMLView\",\n            \"style\": \"IPY_MODEL_9fe6e1167e5d45fbad2adab3d59e017d\",\n            \"_dom_classes\": [],\n            \"description\": \"\",\n            \"_model_name\": \"HTMLModel\",\n            \"placeholder\": \"​\",\n            \"_view_module\": \"@jupyter-widgets/controls\",\n            \"_model_module_version\": \"1.5.0\",\n            
\"value\": \"Downloading: 100%\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.5.0\",\n            \"description_tooltip\": null,\n            \"_model_module\": \"@jupyter-widgets/controls\",\n            \"layout\": \"IPY_MODEL_655c507d8fcf423f8bd6746201f569ae\"\n          }\n        },\n        \"7e7bce13eeed41179e4e15fc7afc89d5\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"FloatProgressModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"ProgressView\",\n            \"style\": \"IPY_MODEL_8c4812afaaec4d65bf84a1e77840d356\",\n            \"_dom_classes\": [],\n            \"description\": \"\",\n            \"_model_name\": \"FloatProgressModel\",\n            \"bar_style\": \"success\",\n            \"max\": 112,\n            \"_view_module\": \"@jupyter-widgets/controls\",\n            \"_model_module_version\": \"1.5.0\",\n            \"value\": 112,\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.5.0\",\n            \"orientation\": \"horizontal\",\n            \"min\": 0,\n            \"description_tooltip\": null,\n            \"_model_module\": \"@jupyter-widgets/controls\",\n            \"layout\": \"IPY_MODEL_64dfa71e3dff4236908e0592e4f90250\"\n          }\n        },\n        \"7bacd13c23cf415fa5d58e9243c4a785\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"HTMLModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"HTMLView\",\n            \"style\": \"IPY_MODEL_ec8d98c1edb148d3ae1c518b61e8155b\",\n            \"_dom_classes\": [],\n            \"description\": \"\",\n            \"_model_name\": \"HTMLModel\",\n            \"placeholder\": \"​\",\n            \"_view_module\": \"@jupyter-widgets/controls\",\n            \"_model_module_version\": \"1.5.0\",\n            \"value\": \" 112/112 [00:00&lt;00:00, 3.41kB/s]\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.5.0\",\n            \"description_tooltip\": null,\n            \"_model_module\": \"@jupyter-widgets/controls\",\n            \"layout\": \"IPY_MODEL_a8212828565d4c9884d44fb45dc51ee5\"\n          }\n        },\n        \"9fe6e1167e5d45fbad2adab3d59e017d\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"DescriptionStyleModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"StyleView\",\n            \"_model_name\": \"DescriptionStyleModel\",\n            \"description_width\": \"\",\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"_model_module_version\": \"1.5.0\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"_model_module\": \"@jupyter-widgets/controls\"\n          }\n        },\n        \"655c507d8fcf423f8bd6746201f569ae\": {\n          \"model_module\": \"@jupyter-widgets/base\",\n          \"model_name\": \"LayoutModel\",\n          \"model_module_version\": \"1.2.0\",\n          \"state\": {\n            \"_view_name\": \"LayoutView\",\n            \"grid_template_rows\": null,\n            \"right\": null,\n            \"justify_content\": null,\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"overflow\": null,\n            \"_model_module_version\": \"1.2.0\",\n            \"_view_count\": null,\n       
     \"flex_flow\": null,\n            \"width\": null,\n            \"min_width\": null,\n            \"border\": null,\n            \"align_items\": null,\n            \"bottom\": null,\n            \"_model_module\": \"@jupyter-widgets/base\",\n            \"top\": null,\n            \"grid_column\": null,\n            \"overflow_y\": null,\n            \"overflow_x\": null,\n            \"grid_auto_flow\": null,\n            \"grid_area\": null,\n            \"grid_template_columns\": null,\n            \"flex\": null,\n            \"_model_name\": \"LayoutModel\",\n            \"justify_items\": null,\n            \"grid_row\": null,\n            \"max_height\": null,\n            \"align_content\": null,\n            \"visibility\": null,\n            \"align_self\": null,\n            \"height\": null,\n            \"min_height\": null,\n            \"padding\": null,\n            \"grid_auto_rows\": null,\n            \"grid_gap\": null,\n            \"max_width\": null,\n            \"order\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"grid_template_areas\": null,\n            \"object_position\": null,\n            \"object_fit\": null,\n            \"grid_auto_columns\": null,\n            \"margin\": null,\n            \"display\": null,\n            \"left\": null\n          }\n        },\n        \"8c4812afaaec4d65bf84a1e77840d356\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"ProgressStyleModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"StyleView\",\n            \"_model_name\": \"ProgressStyleModel\",\n            \"description_width\": \"\",\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"_model_module_version\": \"1.5.0\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"bar_color\": null,\n            \"_model_module\": \"@jupyter-widgets/controls\"\n          }\n        },\n        \"64dfa71e3dff4236908e0592e4f90250\": {\n          \"model_module\": \"@jupyter-widgets/base\",\n          \"model_name\": \"LayoutModel\",\n          \"model_module_version\": \"1.2.0\",\n          \"state\": {\n            \"_view_name\": \"LayoutView\",\n            \"grid_template_rows\": null,\n            \"right\": null,\n            \"justify_content\": null,\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"overflow\": null,\n            \"_model_module_version\": \"1.2.0\",\n            \"_view_count\": null,\n            \"flex_flow\": null,\n            \"width\": null,\n            \"min_width\": null,\n            \"border\": null,\n            \"align_items\": null,\n            \"bottom\": null,\n            \"_model_module\": \"@jupyter-widgets/base\",\n            \"top\": null,\n            \"grid_column\": null,\n            \"overflow_y\": null,\n            \"overflow_x\": null,\n            \"grid_auto_flow\": null,\n            \"grid_area\": null,\n            \"grid_template_columns\": null,\n            \"flex\": null,\n            \"_model_name\": \"LayoutModel\",\n            \"justify_items\": null,\n            \"grid_row\": null,\n            \"max_height\": null,\n            \"align_content\": null,\n            \"visibility\": null,\n            \"align_self\": null,\n            \"height\": null,\n            \"min_height\": null,\n            \"padding\": null,\n            \"grid_auto_rows\": null,\n            
\"grid_gap\": null,\n            \"max_width\": null,\n            \"order\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"grid_template_areas\": null,\n            \"object_position\": null,\n            \"object_fit\": null,\n            \"grid_auto_columns\": null,\n            \"margin\": null,\n            \"display\": null,\n            \"left\": null\n          }\n        },\n        \"ec8d98c1edb148d3ae1c518b61e8155b\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"DescriptionStyleModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"StyleView\",\n            \"_model_name\": \"DescriptionStyleModel\",\n            \"description_width\": \"\",\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"_model_module_version\": \"1.5.0\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"_model_module\": \"@jupyter-widgets/controls\"\n          }\n        },\n        \"a8212828565d4c9884d44fb45dc51ee5\": {\n          \"model_module\": \"@jupyter-widgets/base\",\n          \"model_name\": \"LayoutModel\",\n          \"model_module_version\": \"1.2.0\",\n          \"state\": {\n            \"_view_name\": \"LayoutView\",\n            \"grid_template_rows\": null,\n            \"right\": null,\n            \"justify_content\": null,\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"overflow\": null,\n            \"_model_module_version\": \"1.2.0\",\n            \"_view_count\": null,\n            \"flex_flow\": null,\n            \"width\": null,\n            \"min_width\": null,\n            \"border\": null,\n            \"align_items\": null,\n            \"bottom\": null,\n            \"_model_module\": \"@jupyter-widgets/base\",\n            \"top\": null,\n            \"grid_column\": null,\n            \"overflow_y\": null,\n            \"overflow_x\": null,\n            \"grid_auto_flow\": null,\n            \"grid_area\": null,\n            \"grid_template_columns\": null,\n            \"flex\": null,\n            \"_model_name\": \"LayoutModel\",\n            \"justify_items\": null,\n            \"grid_row\": null,\n            \"max_height\": null,\n            \"align_content\": null,\n            \"visibility\": null,\n            \"align_self\": null,\n            \"height\": null,\n            \"min_height\": null,\n            \"padding\": null,\n            \"grid_auto_rows\": null,\n            \"grid_gap\": null,\n            \"max_width\": null,\n            \"order\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"grid_template_areas\": null,\n            \"object_position\": null,\n            \"object_fit\": null,\n            \"grid_auto_columns\": null,\n            \"margin\": null,\n            \"display\": null,\n            \"left\": null\n          }\n        },\n        \"9a2d4d7da3024cc0828b1a6dafd0dd16\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"HBoxModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"HBoxView\",\n            \"_dom_classes\": [],\n            \"_model_name\": \"HBoxModel\",\n            \"_view_module\": \"@jupyter-widgets/controls\",\n            \"_model_module_version\": \"1.5.0\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.5.0\",\n            
\"box_style\": \"\",\n            \"layout\": \"IPY_MODEL_fe4daa4d7d024187aa2f622dbf3577a8\",\n            \"_model_module\": \"@jupyter-widgets/controls\",\n            \"children\": [\n              \"IPY_MODEL_8380bf9a899645e8aef576e640b41ea2\",\n              \"IPY_MODEL_37c593d2f442497483cd0026498bab05\",\n              \"IPY_MODEL_3cc7b132c94f427ba44858e4c4ce3019\"\n            ]\n          }\n        },\n        \"fe4daa4d7d024187aa2f622dbf3577a8\": {\n          \"model_module\": \"@jupyter-widgets/base\",\n          \"model_name\": \"LayoutModel\",\n          \"model_module_version\": \"1.2.0\",\n          \"state\": {\n            \"_view_name\": \"LayoutView\",\n            \"grid_template_rows\": null,\n            \"right\": null,\n            \"justify_content\": null,\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"overflow\": null,\n            \"_model_module_version\": \"1.2.0\",\n            \"_view_count\": null,\n            \"flex_flow\": null,\n            \"width\": null,\n            \"min_width\": null,\n            \"border\": null,\n            \"align_items\": null,\n            \"bottom\": null,\n            \"_model_module\": \"@jupyter-widgets/base\",\n            \"top\": null,\n            \"grid_column\": null,\n            \"overflow_y\": null,\n            \"overflow_x\": null,\n            \"grid_auto_flow\": null,\n            \"grid_area\": null,\n            \"grid_template_columns\": null,\n            \"flex\": null,\n            \"_model_name\": \"LayoutModel\",\n            \"justify_items\": null,\n            \"grid_row\": null,\n            \"max_height\": null,\n            \"align_content\": null,\n            \"visibility\": null,\n            \"align_self\": null,\n            \"height\": null,\n            \"min_height\": null,\n            \"padding\": null,\n            \"grid_auto_rows\": null,\n            \"grid_gap\": null,\n            \"max_width\": null,\n            \"order\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"grid_template_areas\": null,\n            \"object_position\": null,\n            \"object_fit\": null,\n            \"grid_auto_columns\": null,\n            \"margin\": null,\n            \"display\": null,\n            \"left\": null\n          }\n        },\n        \"8380bf9a899645e8aef576e640b41ea2\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"HTMLModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"HTMLView\",\n            \"style\": \"IPY_MODEL_d2aed3c0f95b4677bd6e949a4ed0403e\",\n            \"_dom_classes\": [],\n            \"description\": \"\",\n            \"_model_name\": \"HTMLModel\",\n            \"placeholder\": \"​\",\n            \"_view_module\": \"@jupyter-widgets/controls\",\n            \"_model_module_version\": \"1.5.0\",\n            \"value\": \"Downloading: 100%\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.5.0\",\n            \"description_tooltip\": null,\n            \"_model_module\": \"@jupyter-widgets/controls\",\n            \"layout\": \"IPY_MODEL_bbc52e0e0b2f4758bd7d6cf44b4670ae\"\n          }\n        },\n        \"37c593d2f442497483cd0026498bab05\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"FloatProgressModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"ProgressView\",\n    
        \"style\": \"IPY_MODEL_8513d262d8764d99aa5d3f2f178b875e\",\n            \"_dom_classes\": [],\n            \"description\": \"\",\n            \"_model_name\": \"FloatProgressModel\",\n            \"bar_style\": \"success\",\n            \"max\": 239,\n            \"_view_module\": \"@jupyter-widgets/controls\",\n            \"_model_module_version\": \"1.5.0\",\n            \"value\": 239,\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.5.0\",\n            \"orientation\": \"horizontal\",\n            \"min\": 0,\n            \"description_tooltip\": null,\n            \"_model_module\": \"@jupyter-widgets/controls\",\n            \"layout\": \"IPY_MODEL_57984bbd46a84a7bb2b7629e6b2f9ef9\"\n          }\n        },\n        \"3cc7b132c94f427ba44858e4c4ce3019\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"HTMLModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"HTMLView\",\n            \"style\": \"IPY_MODEL_b886e2b6bbcd46cf806ff3a0b3cb8d33\",\n            \"_dom_classes\": [],\n            \"description\": \"\",\n            \"_model_name\": \"HTMLModel\",\n            \"placeholder\": \"​\",\n            \"_view_module\": \"@jupyter-widgets/controls\",\n            \"_model_module_version\": \"1.5.0\",\n            \"value\": \" 239/239 [00:00&lt;00:00, 5.49kB/s]\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.5.0\",\n            \"description_tooltip\": null,\n            \"_model_module\": \"@jupyter-widgets/controls\",\n            \"layout\": \"IPY_MODEL_21ac91113f7e4548b416a32b1b3f66a9\"\n          }\n        },\n        \"d2aed3c0f95b4677bd6e949a4ed0403e\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"DescriptionStyleModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"StyleView\",\n            \"_model_name\": \"DescriptionStyleModel\",\n            \"description_width\": \"\",\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"_model_module_version\": \"1.5.0\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"_model_module\": \"@jupyter-widgets/controls\"\n          }\n        },\n        \"bbc52e0e0b2f4758bd7d6cf44b4670ae\": {\n          \"model_module\": \"@jupyter-widgets/base\",\n          \"model_name\": \"LayoutModel\",\n          \"model_module_version\": \"1.2.0\",\n          \"state\": {\n            \"_view_name\": \"LayoutView\",\n            \"grid_template_rows\": null,\n            \"right\": null,\n            \"justify_content\": null,\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"overflow\": null,\n            \"_model_module_version\": \"1.2.0\",\n            \"_view_count\": null,\n            \"flex_flow\": null,\n            \"width\": null,\n            \"min_width\": null,\n            \"border\": null,\n            \"align_items\": null,\n            \"bottom\": null,\n            \"_model_module\": \"@jupyter-widgets/base\",\n            \"top\": null,\n            \"grid_column\": null,\n            \"overflow_y\": null,\n            \"overflow_x\": null,\n            \"grid_auto_flow\": null,\n            \"grid_area\": null,\n            \"grid_template_columns\": null,\n            \"flex\": null,\n            \"_model_name\": \"LayoutModel\",\n            \"justify_items\": 
null,\n            \"grid_row\": null,\n            \"max_height\": null,\n            \"align_content\": null,\n            \"visibility\": null,\n            \"align_self\": null,\n            \"height\": null,\n            \"min_height\": null,\n            \"padding\": null,\n            \"grid_auto_rows\": null,\n            \"grid_gap\": null,\n            \"max_width\": null,\n            \"order\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"grid_template_areas\": null,\n            \"object_position\": null,\n            \"object_fit\": null,\n            \"grid_auto_columns\": null,\n            \"margin\": null,\n            \"display\": null,\n            \"left\": null\n          }\n        },\n        \"8513d262d8764d99aa5d3f2f178b875e\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"ProgressStyleModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"StyleView\",\n            \"_model_name\": \"ProgressStyleModel\",\n            \"description_width\": \"\",\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"_model_module_version\": \"1.5.0\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"bar_color\": null,\n            \"_model_module\": \"@jupyter-widgets/controls\"\n          }\n        },\n        \"57984bbd46a84a7bb2b7629e6b2f9ef9\": {\n          \"model_module\": \"@jupyter-widgets/base\",\n          \"model_name\": \"LayoutModel\",\n          \"model_module_version\": \"1.2.0\",\n          \"state\": {\n            \"_view_name\": \"LayoutView\",\n            \"grid_template_rows\": null,\n            \"right\": null,\n            \"justify_content\": null,\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"overflow\": null,\n            \"_model_module_version\": \"1.2.0\",\n            \"_view_count\": null,\n            \"flex_flow\": null,\n            \"width\": null,\n            \"min_width\": null,\n            \"border\": null,\n            \"align_items\": null,\n            \"bottom\": null,\n            \"_model_module\": \"@jupyter-widgets/base\",\n            \"top\": null,\n            \"grid_column\": null,\n            \"overflow_y\": null,\n            \"overflow_x\": null,\n            \"grid_auto_flow\": null,\n            \"grid_area\": null,\n            \"grid_template_columns\": null,\n            \"flex\": null,\n            \"_model_name\": \"LayoutModel\",\n            \"justify_items\": null,\n            \"grid_row\": null,\n            \"max_height\": null,\n            \"align_content\": null,\n            \"visibility\": null,\n            \"align_self\": null,\n            \"height\": null,\n            \"min_height\": null,\n            \"padding\": null,\n            \"grid_auto_rows\": null,\n            \"grid_gap\": null,\n            \"max_width\": null,\n            \"order\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"grid_template_areas\": null,\n            \"object_position\": null,\n            \"object_fit\": null,\n            \"grid_auto_columns\": null,\n            \"margin\": null,\n            \"display\": null,\n            \"left\": null\n          }\n        },\n        \"b886e2b6bbcd46cf806ff3a0b3cb8d33\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"DescriptionStyleModel\",\n          \"model_module_version\": 
\"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"StyleView\",\n            \"_model_name\": \"DescriptionStyleModel\",\n            \"description_width\": \"\",\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"_model_module_version\": \"1.5.0\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"_model_module\": \"@jupyter-widgets/controls\"\n          }\n        },\n        \"21ac91113f7e4548b416a32b1b3f66a9\": {\n          \"model_module\": \"@jupyter-widgets/base\",\n          \"model_name\": \"LayoutModel\",\n          \"model_module_version\": \"1.2.0\",\n          \"state\": {\n            \"_view_name\": \"LayoutView\",\n            \"grid_template_rows\": null,\n            \"right\": null,\n            \"justify_content\": null,\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"overflow\": null,\n            \"_model_module_version\": \"1.2.0\",\n            \"_view_count\": null,\n            \"flex_flow\": null,\n            \"width\": null,\n            \"min_width\": null,\n            \"border\": null,\n            \"align_items\": null,\n            \"bottom\": null,\n            \"_model_module\": \"@jupyter-widgets/base\",\n            \"top\": null,\n            \"grid_column\": null,\n            \"overflow_y\": null,\n            \"overflow_x\": null,\n            \"grid_auto_flow\": null,\n            \"grid_area\": null,\n            \"grid_template_columns\": null,\n            \"flex\": null,\n            \"_model_name\": \"LayoutModel\",\n            \"justify_items\": null,\n            \"grid_row\": null,\n            \"max_height\": null,\n            \"align_content\": null,\n            \"visibility\": null,\n            \"align_self\": null,\n            \"height\": null,\n            \"min_height\": null,\n            \"padding\": null,\n            \"grid_auto_rows\": null,\n            \"grid_gap\": null,\n            \"max_width\": null,\n            \"order\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"grid_template_areas\": null,\n            \"object_position\": null,\n            \"object_fit\": null,\n            \"grid_auto_columns\": null,\n            \"margin\": null,\n            \"display\": null,\n            \"left\": null\n          }\n        },\n        \"f8958c6de2394fecab9f95388a365431\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"HBoxModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"HBoxView\",\n            \"_dom_classes\": [],\n            \"_model_name\": \"HBoxModel\",\n            \"_view_module\": \"@jupyter-widgets/controls\",\n            \"_model_module_version\": \"1.5.0\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.5.0\",\n            \"box_style\": \"\",\n            \"layout\": \"IPY_MODEL_11a8a4b2d39d4ea8904c0f1b2f6dd906\",\n            \"_model_module\": \"@jupyter-widgets/controls\",\n            \"children\": [\n              \"IPY_MODEL_9657501af7514a60b30fcd60a223980c\",\n              \"IPY_MODEL_61274b2bac5e4835a8bd33dc201bc155\",\n              \"IPY_MODEL_973300b095554b10ac290244772e0a6f\"\n            ]\n          }\n        },\n        \"11a8a4b2d39d4ea8904c0f1b2f6dd906\": {\n          \"model_module\": \"@jupyter-widgets/base\",\n          \"model_name\": \"LayoutModel\",\n          \"model_module_version\": \"1.2.0\",\n         
 \"state\": {\n            \"_view_name\": \"LayoutView\",\n            \"grid_template_rows\": null,\n            \"right\": null,\n            \"justify_content\": null,\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"overflow\": null,\n            \"_model_module_version\": \"1.2.0\",\n            \"_view_count\": null,\n            \"flex_flow\": null,\n            \"width\": null,\n            \"min_width\": null,\n            \"border\": null,\n            \"align_items\": null,\n            \"bottom\": null,\n            \"_model_module\": \"@jupyter-widgets/base\",\n            \"top\": null,\n            \"grid_column\": null,\n            \"overflow_y\": null,\n            \"overflow_x\": null,\n            \"grid_auto_flow\": null,\n            \"grid_area\": null,\n            \"grid_template_columns\": null,\n            \"flex\": null,\n            \"_model_name\": \"LayoutModel\",\n            \"justify_items\": null,\n            \"grid_row\": null,\n            \"max_height\": null,\n            \"align_content\": null,\n            \"visibility\": null,\n            \"align_self\": null,\n            \"height\": null,\n            \"min_height\": null,\n            \"padding\": null,\n            \"grid_auto_rows\": null,\n            \"grid_gap\": null,\n            \"max_width\": null,\n            \"order\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"grid_template_areas\": null,\n            \"object_position\": null,\n            \"object_fit\": null,\n            \"grid_auto_columns\": null,\n            \"margin\": null,\n            \"display\": null,\n            \"left\": null\n          }\n        },\n        \"9657501af7514a60b30fcd60a223980c\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"HTMLModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"HTMLView\",\n            \"style\": \"IPY_MODEL_c3f5f56bb14d44b6a5775a77f6763b94\",\n            \"_dom_classes\": [],\n            \"description\": \"\",\n            \"_model_name\": \"HTMLModel\",\n            \"placeholder\": \"​\",\n            \"_view_module\": \"@jupyter-widgets/controls\",\n            \"_model_module_version\": \"1.5.0\",\n            \"value\": \"Downloading: 100%\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.5.0\",\n            \"description_tooltip\": null,\n            \"_model_module\": \"@jupyter-widgets/controls\",\n            \"layout\": \"IPY_MODEL_fbf430940c8a49949953155b57d07766\"\n          }\n        },\n        \"61274b2bac5e4835a8bd33dc201bc155\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"FloatProgressModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"ProgressView\",\n            \"style\": \"IPY_MODEL_ab05641bcb9c49aab977110fab503a78\",\n            \"_dom_classes\": [],\n            \"description\": \"\",\n            \"_model_name\": \"FloatProgressModel\",\n            \"bar_style\": \"success\",\n            \"max\": 175,\n            \"_view_module\": \"@jupyter-widgets/controls\",\n            \"_model_module_version\": \"1.5.0\",\n            \"value\": 175,\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.5.0\",\n            \"orientation\": \"horizontal\",\n            \"min\": 0,\n            \"description_tooltip\": null,\n            
\"_model_module\": \"@jupyter-widgets/controls\",\n            \"layout\": \"IPY_MODEL_4f11a71d7df943e48ac9ea3bab5c6771\"\n          }\n        },\n        \"973300b095554b10ac290244772e0a6f\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"HTMLModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"HTMLView\",\n            \"style\": \"IPY_MODEL_b6004e09152045e18503cf75e32d4fa6\",\n            \"_dom_classes\": [],\n            \"description\": \"\",\n            \"_model_name\": \"HTMLModel\",\n            \"placeholder\": \"​\",\n            \"_view_module\": \"@jupyter-widgets/controls\",\n            \"_model_module_version\": \"1.5.0\",\n            \"value\": \" 175/175 [00:00&lt;00:00, 5.41kB/s]\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.5.0\",\n            \"description_tooltip\": null,\n            \"_model_module\": \"@jupyter-widgets/controls\",\n            \"layout\": \"IPY_MODEL_590fb707d26948b5b9c8bb3b896f29e1\"\n          }\n        },\n        \"c3f5f56bb14d44b6a5775a77f6763b94\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"DescriptionStyleModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"StyleView\",\n            \"_model_name\": \"DescriptionStyleModel\",\n            \"description_width\": \"\",\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"_model_module_version\": \"1.5.0\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"_model_module\": \"@jupyter-widgets/controls\"\n          }\n        },\n        \"fbf430940c8a49949953155b57d07766\": {\n          \"model_module\": \"@jupyter-widgets/base\",\n          \"model_name\": \"LayoutModel\",\n          \"model_module_version\": \"1.2.0\",\n          \"state\": {\n            \"_view_name\": \"LayoutView\",\n            \"grid_template_rows\": null,\n            \"right\": null,\n            \"justify_content\": null,\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"overflow\": null,\n            \"_model_module_version\": \"1.2.0\",\n            \"_view_count\": null,\n            \"flex_flow\": null,\n            \"width\": null,\n            \"min_width\": null,\n            \"border\": null,\n            \"align_items\": null,\n            \"bottom\": null,\n            \"_model_module\": \"@jupyter-widgets/base\",\n            \"top\": null,\n            \"grid_column\": null,\n            \"overflow_y\": null,\n            \"overflow_x\": null,\n            \"grid_auto_flow\": null,\n            \"grid_area\": null,\n            \"grid_template_columns\": null,\n            \"flex\": null,\n            \"_model_name\": \"LayoutModel\",\n            \"justify_items\": null,\n            \"grid_row\": null,\n            \"max_height\": null,\n            \"align_content\": null,\n            \"visibility\": null,\n            \"align_self\": null,\n            \"height\": null,\n            \"min_height\": null,\n            \"padding\": null,\n            \"grid_auto_rows\": null,\n            \"grid_gap\": null,\n            \"max_width\": null,\n            \"order\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"grid_template_areas\": null,\n            \"object_position\": null,\n            \"object_fit\": null,\n            
\"grid_auto_columns\": null,\n            \"margin\": null,\n            \"display\": null,\n            \"left\": null\n          }\n        },\n        \"ab05641bcb9c49aab977110fab503a78\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"ProgressStyleModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"StyleView\",\n            \"_model_name\": \"ProgressStyleModel\",\n            \"description_width\": \"\",\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"_model_module_version\": \"1.5.0\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"bar_color\": null,\n            \"_model_module\": \"@jupyter-widgets/controls\"\n          }\n        },\n        \"4f11a71d7df943e48ac9ea3bab5c6771\": {\n          \"model_module\": \"@jupyter-widgets/base\",\n          \"model_name\": \"LayoutModel\",\n          \"model_module_version\": \"1.2.0\",\n          \"state\": {\n            \"_view_name\": \"LayoutView\",\n            \"grid_template_rows\": null,\n            \"right\": null,\n            \"justify_content\": null,\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"overflow\": null,\n            \"_model_module_version\": \"1.2.0\",\n            \"_view_count\": null,\n            \"flex_flow\": null,\n            \"width\": null,\n            \"min_width\": null,\n            \"border\": null,\n            \"align_items\": null,\n            \"bottom\": null,\n            \"_model_module\": \"@jupyter-widgets/base\",\n            \"top\": null,\n            \"grid_column\": null,\n            \"overflow_y\": null,\n            \"overflow_x\": null,\n            \"grid_auto_flow\": null,\n            \"grid_area\": null,\n            \"grid_template_columns\": null,\n            \"flex\": null,\n            \"_model_name\": \"LayoutModel\",\n            \"justify_items\": null,\n            \"grid_row\": null,\n            \"max_height\": null,\n            \"align_content\": null,\n            \"visibility\": null,\n            \"align_self\": null,\n            \"height\": null,\n            \"min_height\": null,\n            \"padding\": null,\n            \"grid_auto_rows\": null,\n            \"grid_gap\": null,\n            \"max_width\": null,\n            \"order\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"grid_template_areas\": null,\n            \"object_position\": null,\n            \"object_fit\": null,\n            \"grid_auto_columns\": null,\n            \"margin\": null,\n            \"display\": null,\n            \"left\": null\n          }\n        },\n        \"b6004e09152045e18503cf75e32d4fa6\": {\n          \"model_module\": \"@jupyter-widgets/controls\",\n          \"model_name\": \"DescriptionStyleModel\",\n          \"model_module_version\": \"1.5.0\",\n          \"state\": {\n            \"_view_name\": \"StyleView\",\n            \"_model_name\": \"DescriptionStyleModel\",\n            \"description_width\": \"\",\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"_model_module_version\": \"1.5.0\",\n            \"_view_count\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"_model_module\": \"@jupyter-widgets/controls\"\n          }\n        },\n        \"590fb707d26948b5b9c8bb3b896f29e1\": {\n          \"model_module\": \"@jupyter-widgets/base\",\n          \"model_name\": 
\"LayoutModel\",\n          \"model_module_version\": \"1.2.0\",\n          \"state\": {\n            \"_view_name\": \"LayoutView\",\n            \"grid_template_rows\": null,\n            \"right\": null,\n            \"justify_content\": null,\n            \"_view_module\": \"@jupyter-widgets/base\",\n            \"overflow\": null,\n            \"_model_module_version\": \"1.2.0\",\n            \"_view_count\": null,\n            \"flex_flow\": null,\n            \"width\": null,\n            \"min_width\": null,\n            \"border\": null,\n            \"align_items\": null,\n            \"bottom\": null,\n            \"_model_module\": \"@jupyter-widgets/base\",\n            \"top\": null,\n            \"grid_column\": null,\n            \"overflow_y\": null,\n            \"overflow_x\": null,\n            \"grid_auto_flow\": null,\n            \"grid_area\": null,\n            \"grid_template_columns\": null,\n            \"flex\": null,\n            \"_model_name\": \"LayoutModel\",\n            \"justify_items\": null,\n            \"grid_row\": null,\n            \"max_height\": null,\n            \"align_content\": null,\n            \"visibility\": null,\n            \"align_self\": null,\n            \"height\": null,\n            \"min_height\": null,\n            \"padding\": null,\n            \"grid_auto_rows\": null,\n            \"grid_gap\": null,\n            \"max_width\": null,\n            \"order\": null,\n            \"_view_module_version\": \"1.2.0\",\n            \"grid_template_areas\": null,\n            \"object_position\": null,\n            \"object_fit\": null,\n            \"grid_auto_columns\": null,\n            \"margin\": null,\n            \"display\": null,\n            \"left\": null\n          }\n        }\n      }\n    }\n  },\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"# [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/cene555/ru-clip-tiny/blob/main/notebooks/ru_CLIP_tiny_onnx.ipynb)\"\n      ],\n      \"metadata\": {\n        \"id\": \"JsWuTduwaagq\"\n      }\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## Select a runtime GPU to continue:\\n\",\n        \"\\n\",\n        \"Click Runtime -> Change Runtime Type -> switch \\\"Harware accelerator\\\" to be GPU. 
Save it, and you may then connect to a GPU\"\n      ],\n      \"metadata\": {\n        \"id\": \"VCCzmQdKJPkv\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"#@title Allowed Resources\\n\",\n        \"import multiprocessing\\n\",\n        \"import torch\\n\",\n        \"from psutil import virtual_memory\\n\",\n        \"\\n\",\n        \"ram_gb = round(virtual_memory().total / 1024**3, 1)\\n\",\n        \"\\n\",\n        \"print('CPU:', multiprocessing.cpu_count())\\n\",\n        \"print('RAM GB:', ram_gb)\\n\",\n        \"print(\\\"PyTorch version:\\\", torch.__version__)\\n\",\n        \"print(\\\"CUDA version:\\\", torch.version.cuda)\\n\",\n        \"print(\\\"cuDNN version:\\\", torch.backends.cudnn.version())\\n\",\n        \"device = torch.device(\\\"cuda:0\\\" if torch.cuda.is_available() else \\\"cpu\\\")\\n\",\n        \"print(\\\"device:\\\", device.type)\\n\",\n        \"\\n\",\n        \"!nvidia-smi\"\n      ],\n      \"metadata\": {\n        \"cellView\": \"form\",\n        \"id\": \"6xdy_cPJEYXV\",\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"outputId\": \"9b5c5751-3377-4623-fd90-f59c21118c80\"\n      },\n      \"execution_count\": null,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"CPU: 2\\n\",\n            \"RAM GB: 12.7\\n\",\n            \"PyTorch version: 1.10.0+cu111\\n\",\n            \"CUDA version: 11.1\\n\",\n            \"cuDNN version: 8005\\n\",\n            \"device: cuda\\n\",\n            \"Tue Feb  1 17:26:24 2022       \\n\",\n            \"+-----------------------------------------------------------------------------+\\n\",\n            \"| NVIDIA-SMI 495.46       Driver Version: 460.32.03    CUDA Version: 11.2     |\\n\",\n            \"|-------------------------------+----------------------+----------------------+\\n\",\n            \"| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |\\n\",\n            \"| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |\\n\",\n            \"|                               |                      |               MIG M. 
|\\n\",\n            \"|===============================+======================+======================|\\n\",\n            \"|   0  Tesla T4            Off  | 00000000:00:04.0 Off |                    0 |\\n\",\n            \"| N/A   61C    P8    11W /  70W |      3MiB / 15109MiB |      0%      Default |\\n\",\n            \"|                               |                      |                  N/A |\\n\",\n            \"+-------------------------------+----------------------+----------------------+\\n\",\n            \"                                                                               \\n\",\n            \"+-----------------------------------------------------------------------------+\\n\",\n            \"| Processes:                                                                  |\\n\",\n            \"|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |\\n\",\n            \"|        ID   ID                                                   Usage      |\\n\",\n            \"|=============================================================================|\\n\",\n            \"|  No running processes found                                                 |\\n\",\n            \"+-----------------------------------------------------------------------------+\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## Restart colab session after installation\\n\",\n        \"Reload session if something doesn't work (may need multiple times)\"\n      ],\n      \"metadata\": {\n        \"id\": \"hmNP7iJBj6XZ\"\n      }\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## Install requirements\"\n      ],\n      \"metadata\": {\n        \"id\": \"WWXCt_2NLhN_\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"%%capture\\n\",\n        \"!gdown -O ru-clip-tiny.pkl https://drive.google.com/uc?id=1-3g3J90pZmHo9jbBzsEmr7ei5zm3VXOL\\n\",\n        \"\\n\",\n        \"!pip install git+https://github.com/cene555/ru-clip-tiny.git\\n\",\n        \"!pip install git+https://github.com/Lednik7/CLIP-ONNX.git\\n\",\n        \"!pip install onnxruntime-gpu\\n\",\n        \"\\n\",\n        \"!wget -c -O CLIP.png https://github.com/openai/CLIP/blob/main/CLIP.png?raw=true\"\n      ],\n      \"metadata\": {\n        \"id\": \"FWEEtd7Vryaf\"\n      },\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import onnxruntime\\n\",\n        \"\\n\",\n        \"# priority device (if available)\\n\",\n        \"print(onnxruntime.get_device())\"\n      ],\n      \"metadata\": {\n        \"id\": \"bUFx02Dhjap4\",\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"outputId\": \"f595c387-da47-47e5-f96a-2d84adf3286b\"\n      },\n      \"execution_count\": null,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"GPU\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## Import libraries\"\n      ],\n      \"metadata\": {\n        \"id\": \"PHb4CAoRL3qC\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import torch\\n\",\n        \"from rucliptiny import RuCLIPtiny\\n\",\n        \"from rucliptiny.utils import get_transform\\n\",\n     
   \"from rucliptiny.tokenizer import Tokenizer\"\n      ],\n      \"metadata\": {\n        \"id\": \"cznZ7ozDL5-M\"\n      },\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import warnings\\n\",\n        \"\\n\",\n        \"warnings.filterwarnings(\\\"ignore\\\", category=UserWarning)\"\n      ],\n      \"metadata\": {\n        \"id\": \"57COx0BKCmFA\"\n      },\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## Load model\"\n      ],\n      \"metadata\": {\n        \"id\": \"ithu4-z0PIm5\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"#@title speed_test function\\n\",\n        \"\\n\",\n        \"import time\\n\",\n        \"\\n\",\n        \"def speed_test(func, data_gen, n=5, empty_cache=True, is_text=False,\\n\",\n        \"               first_run=True):\\n\",\n        \"    if empty_cache: torch.cuda.empty_cache()\\n\",\n        \"    if first_run:\\n\",\n        \"        if is_text:\\n\",\n        \"            input_data1, input_data2 = data_gen()\\n\",\n        \"            func(input_data1, input_data2)\\n\",\n        \"        else:\\n\",\n        \"            input_data = data_gen()\\n\",\n        \"            func(input_data)\\n\",\n        \"        torch.cuda.empty_cache()\\n\",\n        \"    \\n\",\n        \"    values = []\\n\",\n        \"    for _ in range(n):\\n\",\n        \"        if is_text:\\n\",\n        \"            input_data1, input_data2 = data_gen()\\n\",\n        \"        else:\\n\",\n        \"            input_data = data_gen()\\n\",\n        \"        if is_text:\\n\",\n        \"            t = time.time()\\n\",\n        \"            func(input_data1, input_data2)\\n\",\n        \"        else:\\n\",\n        \"            t = time.time()\\n\",\n        \"            func(input_data)\\n\",\n        \"        values.append(time.time() - t)\\n\",\n        \"        if empty_cache: torch.cuda.empty_cache()\\n\",\n        \"    return sum(values) / n\"\n      ],\n      \"metadata\": {\n        \"id\": \"GqKM04tP4Vv3\",\n        \"cellView\": \"form\"\n      },\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"torch.manual_seed(1)\\n\",\n        \"device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\"\n      ],\n      \"metadata\": {\n        \"id\": \"SSOHYDRQGif-\"\n      },\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"model = RuCLIPtiny()\\n\",\n        \"model.load_state_dict(torch.load('ru-clip-tiny.pkl',\\n\",\n        \"                                 map_location=device))\\n\",\n        \"model = model.to(device).eval()\\n\",\n        \"for x in model.parameters(): x.requires_grad = False\\n\",\n        \"torch.cuda.empty_cache()\"\n      ],\n      \"metadata\": {\n        \"id\": \"OpFAZfq-_nJe\"\n      },\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"transforms = get_transform()\\n\",\n        \"tokenizer = Tokenizer()\"\n      ],\n      \"metadata\": {\n        \"id\": \"KEZj2WrwkzZz\",\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\",\n          \"height\": 145,\n          \"referenced_widgets\": [\n            
\"5319c7971f234d4bb615508f76475f9e\",\n            \"c43027a0735e459ca1f710e5a9c43177\",\n            \"c00c959249db4f2a9b97adabd2684c3c\",\n            \"d9e4edd05c1e40f991eb2c2f1fc9ebc1\",\n            \"ab4928c0a86449a384e36d8c0bc25717\",\n            \"3251223dac8f43c081701ff7f663cb35\",\n            \"d63d5559ce534b86969132d3ff8d875b\",\n            \"59618f021fc4495e9c401a421d28d4a0\",\n            \"c56ba935682647dca4bdcc593fe0d2cc\",\n            \"01808c7fec8447368d60a33b2d683851\",\n            \"f9466e0349c84633a0fb8ceeffa2a984\",\n            \"10ee9777b41e42129e2c9cc9327ad88f\",\n            \"cb6a647757244da3941602127ec38ccb\",\n            \"a64a223312144f2f9736729b63ab1ce5\",\n            \"7e7bce13eeed41179e4e15fc7afc89d5\",\n            \"7bacd13c23cf415fa5d58e9243c4a785\",\n            \"9fe6e1167e5d45fbad2adab3d59e017d\",\n            \"655c507d8fcf423f8bd6746201f569ae\",\n            \"8c4812afaaec4d65bf84a1e77840d356\",\n            \"64dfa71e3dff4236908e0592e4f90250\",\n            \"ec8d98c1edb148d3ae1c518b61e8155b\",\n            \"a8212828565d4c9884d44fb45dc51ee5\",\n            \"9a2d4d7da3024cc0828b1a6dafd0dd16\",\n            \"fe4daa4d7d024187aa2f622dbf3577a8\",\n            \"8380bf9a899645e8aef576e640b41ea2\",\n            \"37c593d2f442497483cd0026498bab05\",\n            \"3cc7b132c94f427ba44858e4c4ce3019\",\n            \"d2aed3c0f95b4677bd6e949a4ed0403e\",\n            \"bbc52e0e0b2f4758bd7d6cf44b4670ae\",\n            \"8513d262d8764d99aa5d3f2f178b875e\",\n            \"57984bbd46a84a7bb2b7629e6b2f9ef9\",\n            \"b886e2b6bbcd46cf806ff3a0b3cb8d33\",\n            \"21ac91113f7e4548b416a32b1b3f66a9\",\n            \"f8958c6de2394fecab9f95388a365431\",\n            \"11a8a4b2d39d4ea8904c0f1b2f6dd906\",\n            \"9657501af7514a60b30fcd60a223980c\",\n            \"61274b2bac5e4835a8bd33dc201bc155\",\n            \"973300b095554b10ac290244772e0a6f\",\n            \"c3f5f56bb14d44b6a5775a77f6763b94\",\n            \"fbf430940c8a49949953155b57d07766\",\n            \"ab05641bcb9c49aab977110fab503a78\",\n            \"4f11a71d7df943e48ac9ea3bab5c6771\",\n            \"b6004e09152045e18503cf75e32d4fa6\",\n            \"590fb707d26948b5b9c8bb3b896f29e1\"\n          ]\n        },\n        \"outputId\": \"466854ba-7fa2-4154-ada2-391626146c95\"\n      },\n      \"execution_count\": null,\n      \"outputs\": [\n        {\n          \"output_type\": \"display_data\",\n          \"data\": {\n            \"application/vnd.jupyter.widget-view+json\": {\n              \"model_id\": \"5319c7971f234d4bb615508f76475f9e\",\n              \"version_minor\": 0,\n              \"version_major\": 2\n            },\n            \"text/plain\": [\n              \"Downloading:   0%|          | 0.00/373k [00:00<?, ?B/s]\"\n            ]\n          },\n          \"metadata\": {}\n        },\n        {\n          \"output_type\": \"display_data\",\n          \"data\": {\n            \"application/vnd.jupyter.widget-view+json\": {\n              \"model_id\": \"10ee9777b41e42129e2c9cc9327ad88f\",\n              \"version_minor\": 0,\n              \"version_major\": 2\n            },\n            \"text/plain\": [\n              \"Downloading:   0%|          | 0.00/112 [00:00<?, ?B/s]\"\n            ]\n          },\n          \"metadata\": {}\n        },\n        {\n          \"output_type\": \"display_data\",\n          \"data\": {\n            \"application/vnd.jupyter.widget-view+json\": {\n              \"model_id\": \"9a2d4d7da3024cc0828b1a6dafd0dd16\",\n              
\"version_minor\": 0,\n              \"version_major\": 2\n            },\n            \"text/plain\": [\n              \"Downloading:   0%|          | 0.00/239 [00:00<?, ?B/s]\"\n            ]\n          },\n          \"metadata\": {}\n        },\n        {\n          \"output_type\": \"display_data\",\n          \"data\": {\n            \"application/vnd.jupyter.widget-view+json\": {\n              \"model_id\": \"f8958c6de2394fecab9f95388a365431\",\n              \"version_minor\": 0,\n              \"version_major\": 2\n            },\n            \"text/plain\": [\n              \"Downloading:   0%|          | 0.00/175 [00:00<?, ?B/s]\"\n            ]\n          },\n          \"metadata\": {}\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## [Speed test] Batch 64\"\n      ],\n      \"metadata\": {\n        \"id\": \"BGsIitOkCCLE\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"speed_test(model.encode_image, lambda: torch.randint(1, 255, (64, 3, 224, 224)).to(device))\"\n      ],\n      \"metadata\": {\n        \"id\": \"5Ii9OlgUjR9J\",\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"outputId\": \"21f1c03c-3e45-4650-d892-be2d83021d21\"\n      },\n      \"execution_count\": null,\n      \"outputs\": [\n        {\n          \"output_type\": \"execute_result\",\n          \"data\": {\n            \"text/plain\": [\n              \"0.011787748336791993\"\n            ]\n          },\n          \"metadata\": {},\n          \"execution_count\": 7\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"speed_test(model.encode_text,\\n\",\n        \"           lambda: (torch.randint(1, 255, (64, 77)).to(device),\\n\",\n        \"                    torch.randint(0, 2, (64, 77)).to(device)),\\n\",\n        \"           is_text=True)\"\n      ],\n      \"metadata\": {\n        \"id\": \"3Ho_rGd6j0_8\",\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"outputId\": \"1026df31-ea4b-4f50-e76d-f300bee0299a\"\n      },\n      \"execution_count\": null,\n      \"outputs\": [\n        {\n          \"output_type\": \"execute_result\",\n          \"data\": {\n            \"text/plain\": [\n              \"0.004021787643432617\"\n            ]\n          },\n          \"metadata\": {},\n          \"execution_count\": 8\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## Prepare functions\"\n      ],\n      \"metadata\": {\n        \"id\": \"81uWLBrMkl3T\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"from PIL import Image\\n\",\n        \"import numpy as np\"\n      ],\n      \"metadata\": {\n        \"id\": \"ry5BqVbzk-gM\"\n      },\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"# batch first\\n\",\n        \"image = transforms(Image.open(\\\"CLIP.png\\\")).unsqueeze(0).cpu() # [1, 3, 224, 224]\\n\",\n        \"\\n\",\n        \"# batch first\\n\",\n        \"texts = ['диаграмма', 'собака', 'кошка']\\n\",\n        \"text_tokens, attention_mask = tokenizer.tokenize(texts, max_len=77)\\n\",\n        \"text_tokens, attention_mask = text_tokens.cpu(), attention_mask.cpu() # [3, 77]\\n\",\n        \"\\n\",\n        \"# batch second\\n\",\n        \"dummy_input_text = torch.stack([text_tokens, 
attention_mask]).detach().cpu()\"\n      ],\n      \"metadata\": {\n        \"id\": \"jyE4C7nIkT_5\"\n      },\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"text_tokens_onnx = text_tokens.detach().cpu().numpy().astype(np.int64)\\n\",\n        \"attention_mask_onnx = attention_mask.detach().cpu().numpy().astype(np.int64)\\n\",\n        \"\\n\",\n        \"image_onnx = image.detach().cpu().numpy().astype(np.float32)\\n\",\n        \"text_onnx = torch.stack([text_tokens, attention_mask]).detach().cpu()\\\\\\n\",\n        \"                                                    .numpy().astype(np.int64)\"\n      ],\n      \"metadata\": {\n        \"id\": \"9SJJmuuWlSjS\"\n      },\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## Convert RuCLIP model to ONNX\"\n      ],\n      \"metadata\": {\n        \"id\": \"Y7V4BjOGkRcu\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"class Textual(torch.nn.Module):\\n\",\n        \"    def __init__(self, model):\\n\",\n        \"        super().__init__()\\n\",\n        \"        self.model = model\\n\",\n        \"\\n\",\n        \"    def forward(self, input_data):\\n\",\n        \"        input_ids, attention_mask = input_data\\n\",\n        \"        x = self.model.transformer(input_ids=input_ids, attention_mask=attention_mask)\\n\",\n        \"        x = x.last_hidden_state[:, 0, :]\\n\",\n        \"        x = self.model.final_ln(x)\\n\",\n        \"        return x\"\n      ],\n      \"metadata\": {\n        \"id\": \"HzGiuIo8m341\"\n      },\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"from clip_onnx import clip_onnx\\n\",\n        \"from clip_onnx.utils import DEFAULT_EXPORT\\n\",\n        \"\\n\",\n        \"visual_path = \\\"clip_visual.onnx\\\"\\n\",\n        \"textual_path = \\\"clip_textual.onnx\\\"\\n\",\n        \"\\n\",\n        \"textual_export_params = DEFAULT_EXPORT.copy()\\n\",\n        \"textual_export_params[\\\"dynamic_axes\\\"] = {'input': {1: 'batch_size'},\\n\",\n        \"                                         'output': {0: 'batch_size'}}\\n\",\n        \"\\n\",\n        \"onnx_model = clip_onnx(model.cpu(), visual_path=visual_path, textual_path=textual_path)\\n\",\n        \"onnx_model.convert2onnx(image, dummy_input_text, verbose=True,\\n\",\n        \"                        textual_wrapper=Textual,\\n\",\n        \"                        textual_export_params=textual_export_params)\"\n      ],\n      \"metadata\": {\n        \"id\": \"k5eQK8gJla5a\",\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"outputId\": \"09ec9b1d-70f0-4d01-87be-5bb622c14e89\"\n      },\n      \"execution_count\": null,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"[CLIP ONNX] Start convert visual model\\n\",\n            \"[CLIP ONNX] Start check visual model\\n\",\n            \"[CLIP ONNX] Start convert textual model\\n\"\n          ]\n        },\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stderr\",\n          \"text\": [\n            \"/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:7: TracerWarning: Iterating over a tensor might cause the trace to 
be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).\\n\",\n            \"  import sys\\n\"\n          ]\n        },\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"[CLIP ONNX] Start check textual model\\n\",\n            \"[CLIP ONNX] Models converts successfully\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## [ONNX] CUDA inference mode\"\n      ],\n      \"metadata\": {\n        \"id\": \"QQ0A0gUFzQr-\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"# Optional cell: reloads the exported .onnx files from disk,\\n\",\n        \"# so it can be skipped right after convert2onnx\\n\",\n        \"\\n\",\n        \"visual_path = \\\"clip_visual.onnx\\\"\\n\",\n        \"textual_path = \\\"clip_textual.onnx\\\"\\n\",\n        \"\\n\",\n        \"onnx_model.load_onnx(visual_path,\\n\",\n        \"                     textual_path,\\n\",\n        \"                     29.9119) # model.logit_scale.exp()\"\n      ],\n      \"metadata\": {\n        \"id\": \"YR-Pv3E8q_mz\"\n      },\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"# ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']\\n\",\n        \"onnx_model.start_sessions(providers=[\\\"CUDAExecutionProvider\\\"]) # cuda mode\"\n      ],\n      \"metadata\": {\n        \"id\": \"J2qxXvmfo2eu\"\n      },\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"onnx_model.visual_session.get_providers()\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"yq05H9f7vyQy\",\n        \"outputId\": \"2c39c48b-db02-4610-addd-901429497a43\"\n      },\n      \"execution_count\": null,\n      \"outputs\": [\n        {\n          \"output_type\": \"execute_result\",\n          \"data\": {\n            \"text/plain\": [\n              \"['CUDAExecutionProvider', 'CPUExecutionProvider']\"\n            ]\n          },\n          \"metadata\": {},\n          \"execution_count\": 16\n        }\n      ]\n    },\n    
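{\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## [Sanity check] ONNX vs PyTorch outputs\\n\",\n        \"\\n\",\n        \"A minimal sketch before benchmarking: the exported visual session should reproduce the PyTorch image features. It reuses `image` / `image_onnx` from above and assumes `onnx_model.encode_image` returns a NumPy array.\"\n      ],\n      \"metadata\": {}\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"# Parity check (sketch): ONNX and PyTorch features should match closely\\n\",\n        \"with torch.inference_mode():\\n\",\n        \"    torch_features = model.cpu().encode_image(image).numpy()\\n\",\n        \"onnx_features = onnx_model.encode_image(image_onnx)\\n\",\n        \"print(np.abs(torch_features - onnx_features).max())\"\n      ],\n      \"metadata\": {},\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## [Speed test] Batch 64\"\n      ],\n      \"metadata\": {\n        \"id\": \"EieJHr_CA2ui\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"speed_test(onnx_model.encode_image,\\n\",\n        \"           lambda: np.random.uniform(1, 255, (64, 3, 224, 224))\\\\\\n\",\n        \"                                                .astype(np.float32))\"\n      ],\n      \"metadata\": {\n        \"id\": \"kyF8lyTXnwCz\",\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"outputId\": \"d675b77b-7979-44e6-f7d9-45013a1b17b8\"\n      },\n      \"execution_count\": null,\n      \"outputs\": [\n        {\n          \"output_type\": \"execute_result\",\n          \"data\": {\n            \"text/plain\": [\n              \"0.28517956733703614\"\n            ]\n          },\n          \"metadata\": {},\n          \"execution_count\": 17\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"speed_test(onnx_model.encode_text,\\n\",\n        \"           lambda: np.stack([np.random.randint(1, 255, 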
(64, 77)),\\n\",\n        \"                             np.random.randint(0, 2, (64, 77))]))\"\n      ],\n      \"metadata\": {\n        \"id\": \"AmShwsCtoYte\",\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"outputId\": \"9cd94020-d813-4ddd-cd74-cb6d7d922930\"\n      },\n      \"execution_count\": null,\n      \"outputs\": [\n        {\n          \"output_type\": \"execute_result\",\n          \"data\": {\n            \"text/plain\": [\n              \"0.012344837188720703\"\n            ]\n          },\n          \"metadata\": {},\n          \"execution_count\": 18\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"markdown\",\n      \"source\": [\n        \"## [Speed test] Compare PyTorch and ONNX\"\n      ],\n      \"metadata\": {\n        \"id\": \"zejMPUDCB2Mi\"\n      }\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import random\\n\",\n        \"import torch\\n\",\n        \"import time\\n\",\n        \"\\n\",\n        \"def set_seed():\\n\",\n        \"    torch.manual_seed(12)\\n\",\n        \"    torch.cuda.manual_seed(12)\\n\",\n        \"    np.random.seed(12)\\n\",\n        \"    random.seed(12)\\n\",\n        \"\\n\",\n        \"    torch.backends.cudnn.deterministic = True\"\n      ],\n      \"metadata\": {\n        \"id\": \"HqLSjsiGCJXW\"\n      },\n      \"execution_count\": null,\n      \"outputs\": []\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"n = 20\\n\",\n        \"model = model.to(device)\\n\",\n        \"\\n\",\n        \"clip_results = {\\\"encode_image\\\": [],\\n\",\n        \"                \\\"encode_text\\\": []}\\n\",\n        \"\\n\",\n        \"onnx_results = {\\\"encode_image\\\": [],\\n\",\n        \"                \\\"encode_text\\\": []}\\n\",\n        \"\\n\",\n        \"for batch in [2, 8, 16, 32, 64]:\\n\",\n        \"    set_seed()\\n\",\n        \"    result = speed_test(onnx_model.encode_image,\\n\",\n        \"                        lambda: np.random.uniform(1, 255, (batch, 3, 224, 224))\\\\\\n\",\n        \"                        .astype(np.float32), n=n)\\n\",\n        \"    result = round(result, 3)\\n\",\n        \"    onnx_results[\\\"encode_image\\\"].append([batch, result])\\n\",\n        \"    print(\\\"onnx\\\", batch, \\\"encode_image\\\", result)\\n\",\n        \"\\n\",\n        \"    set_seed()\\n\",\n        \"    with torch.inference_mode():\\n\",\n        \"        result = speed_test(model.encode_image,\\n\",\n        \"                            lambda: torch.randint(1, 255, (batch, 3, 224, 224))\\\\\\n\",\n        \"                            .to(device), n=n)\\n\",\n        \"        result = round(result, 3)\\n\",\n        \"    print(\\\"torch\\\", batch, \\\"encode_image\\\", result)\\n\",\n        \"    clip_results[\\\"encode_image\\\"].append([batch, result])\\n\",\n        \"\\n\",\n        \"    set_seed()\\n\",\n        \"    result = speed_test(onnx_model.encode_text,\\n\",\n        \"                        lambda: np.stack([np.random.randint(1, 255, (batch, 77)),\\n\",\n        \"                                          np.random.randint(0, 2, (batch, 77))]),\\n\",\n        \"                        n=n)\\n\",\n        \"    result = round(result, 3)\\n\",\n        \"    onnx_results[\\\"encode_text\\\"].append([batch, result])\\n\",\n        \"    print(\\\"onnx\\\", batch, \\\"encode_text\\\", result)\\n\",\n        \"\\n\",\n        \"    set_seed()\\n\",\n        
\"    with torch.inference_mode():\\n\",\n        \"        result = speed_test(model.encode_text,\\n\",\n        \"                            lambda: (torch.randint(1, 255, (batch, 77)).to(device),\\n\",\n        \"                                     torch.randint(0, 2, (batch, 77)).to(device)),\\n\",\n        \"                            is_text=True, n=n)\\n\",\n        \"        result = round(result, 3)\\n\",\n        \"    print(\\\"torch\\\", batch, \\\"encode_text\\\", result)\\n\",\n        \"    clip_results[\\\"encode_text\\\"].append([batch, result])\\n\",\n        \"\\n\",\n        \"    print(\\\"-\\\" * 78)\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"YILIR6qMB_eb\",\n        \"outputId\": \"95e2c9a0-26bb-4203-f0e5-50589f44ddaf\"\n      },\n      \"execution_count\": null,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"onnx 2 encode_image 0.011\\n\",\n            \"torch 2 encode_image 0.018\\n\",\n            \"onnx 2 encode_text 0.001\\n\",\n            \"torch 2 encode_text 0.003\\n\",\n            \"------------------------------------------------------------------------------\\n\",\n            \"onnx 8 encode_image 0.035\\n\",\n            \"torch 8 encode_image 0.01\\n\",\n            \"onnx 8 encode_text 0.002\\n\",\n            \"torch 8 encode_text 0.003\\n\",\n            \"------------------------------------------------------------------------------\\n\",\n            \"onnx 16 encode_image 0.07\\n\",\n            \"torch 16 encode_image 0.01\\n\",\n            \"onnx 16 encode_text 0.004\\n\",\n            \"torch 16 encode_text 0.003\\n\",\n            \"------------------------------------------------------------------------------\\n\",\n            \"onnx 32 encode_image 0.145\\n\",\n            \"torch 32 encode_image 0.012\\n\",\n            \"onnx 32 encode_text 0.007\\n\",\n            \"torch 32 encode_text 0.004\\n\",\n            \"------------------------------------------------------------------------------\\n\",\n            \"onnx 64 encode_image 0.294\\n\",\n            \"torch 64 encode_image 0.013\\n\",\n            \"onnx 64 encode_text 0.014\\n\",\n            \"torch 64 encode_text 0.005\\n\",\n            \"------------------------------------------------------------------------------\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"import pandas as pd\\n\",\n        \"\\n\",\n        \"pd.DataFrame({\\\"backend\\\": [\\\"onnx\\\", \\\"torch\\\"] * 5,\\n\",\n        \"              \\\"batch\\\": [2, 2, 8, 8, 16, 16, 32, 32, 64, 64],\\n\",\n        \"              \\\"encode_image\\\": [j[1] for i in zip(onnx_results[\\\"encode_image\\\"],\\n\",\n        \"                                              clip_results[\\\"encode_image\\\"]) for j in i],\\n\",\n        \"              \\\"encode_text\\\": [j[1] for i in zip(onnx_results[\\\"encode_text\\\"],\\n\",\n        \"                                              clip_results[\\\"encode_text\\\"]) for j in i]})\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\",\n          \"height\": 362\n        },\n        \"id\": \"WAWUKqQOGd-2\",\n        \"outputId\": \"725a771f-1b75-4e3a-afa8-7ee7c9caac1f\"\n      },\n      \"execution_count\": null,\n      \"outputs\": [\n        
{\n          \"output_type\": \"execute_result\",\n          \"data\": {\n            \"text/html\": [\n              \"\\n\",\n              \"  <div id=\\\"df-66ec7fd2-693a-4295-b7bb-d5a196f03b9b\\\">\\n\",\n              \"    <div class=\\\"colab-df-container\\\">\\n\",\n              \"      <div>\\n\",\n              \"<style scoped>\\n\",\n              \"    .dataframe tbody tr th:only-of-type {\\n\",\n              \"        vertical-align: middle;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .dataframe tbody tr th {\\n\",\n              \"        vertical-align: top;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .dataframe thead th {\\n\",\n              \"        text-align: right;\\n\",\n              \"    }\\n\",\n              \"</style>\\n\",\n              \"<table border=\\\"1\\\" class=\\\"dataframe\\\">\\n\",\n              \"  <thead>\\n\",\n              \"    <tr style=\\\"text-align: right;\\\">\\n\",\n              \"      <th></th>\\n\",\n              \"      <th>backend</th>\\n\",\n              \"      <th>batch</th>\\n\",\n              \"      <th>encode_image</th>\\n\",\n              \"      <th>encode_text</th>\\n\",\n              \"    </tr>\\n\",\n              \"  </thead>\\n\",\n              \"  <tbody>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>0</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>2</td>\\n\",\n              \"      <td>0.011</td>\\n\",\n              \"      <td>0.001</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>1</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>2</td>\\n\",\n              \"      <td>0.018</td>\\n\",\n              \"      <td>0.003</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>2</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>8</td>\\n\",\n              \"      <td>0.035</td>\\n\",\n              \"      <td>0.002</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>3</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>8</td>\\n\",\n              \"      <td>0.010</td>\\n\",\n              \"      <td>0.003</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>4</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>16</td>\\n\",\n              \"      <td>0.070</td>\\n\",\n              \"      <td>0.004</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>5</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>16</td>\\n\",\n              \"      <td>0.010</td>\\n\",\n              \"      <td>0.003</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>6</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>32</td>\\n\",\n              \"      <td>0.145</td>\\n\",\n              \"      <td>0.007</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>7</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>32</td>\\n\",\n              \"      <td>0.012</td>\\n\",\n              \"      <td>0.004</td>\\n\",\n              \"    
</tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>8</th>\\n\",\n              \"      <td>onnx</td>\\n\",\n              \"      <td>64</td>\\n\",\n              \"      <td>0.294</td>\\n\",\n              \"      <td>0.014</td>\\n\",\n              \"    </tr>\\n\",\n              \"    <tr>\\n\",\n              \"      <th>9</th>\\n\",\n              \"      <td>torch</td>\\n\",\n              \"      <td>64</td>\\n\",\n              \"      <td>0.013</td>\\n\",\n              \"      <td>0.005</td>\\n\",\n              \"    </tr>\\n\",\n              \"  </tbody>\\n\",\n              \"</table>\\n\",\n              \"</div>\\n\",\n              \"      <button class=\\\"colab-df-convert\\\" onclick=\\\"convertToInteractive('df-66ec7fd2-693a-4295-b7bb-d5a196f03b9b')\\\"\\n\",\n              \"              title=\\\"Convert this dataframe to an interactive table.\\\"\\n\",\n              \"              style=\\\"display:none;\\\">\\n\",\n              \"        \\n\",\n              \"  <svg xmlns=\\\"http://www.w3.org/2000/svg\\\" height=\\\"24px\\\"viewBox=\\\"0 0 24 24\\\"\\n\",\n              \"       width=\\\"24px\\\">\\n\",\n              \"    <path d=\\\"M0 0h24v24H0V0z\\\" fill=\\\"none\\\"/>\\n\",\n              \"    <path d=\\\"M18.56 5.44l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 2.06-2.06.94zm-11 1L8.5 8.5l.94-2.06 2.06-.94-2.06-.94L8.5 2.5l-.94 2.06-2.06.94zm10 10l.94 2.06.94-2.06 2.06-.94-2.06-.94-.94-2.06-.94 2.06-2.06.94z\\\"/><path d=\\\"M17.41 7.96l-1.37-1.37c-.4-.4-.92-.59-1.43-.59-.52 0-1.04.2-1.43.59L10.3 9.45l-7.72 7.72c-.78.78-.78 2.05 0 2.83L4 21.41c.39.39.9.59 1.41.59.51 0 1.02-.2 1.41-.59l7.78-7.78 2.81-2.81c.8-.78.8-2.07 0-2.86zM5.41 20L4 18.59l7.72-7.72 1.47 1.35L5.41 20z\\\"/>\\n\",\n              \"  </svg>\\n\",\n              \"      </button>\\n\",\n              \"      \\n\",\n              \"  <style>\\n\",\n              \"    .colab-df-container {\\n\",\n              \"      display:flex;\\n\",\n              \"      flex-wrap:wrap;\\n\",\n              \"      gap: 12px;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .colab-df-convert {\\n\",\n              \"      background-color: #E8F0FE;\\n\",\n              \"      border: none;\\n\",\n              \"      border-radius: 50%;\\n\",\n              \"      cursor: pointer;\\n\",\n              \"      display: none;\\n\",\n              \"      fill: #1967D2;\\n\",\n              \"      height: 32px;\\n\",\n              \"      padding: 0 0 0 0;\\n\",\n              \"      width: 32px;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    .colab-df-convert:hover {\\n\",\n              \"      background-color: #E2EBFA;\\n\",\n              \"      box-shadow: 0px 1px 2px rgba(60, 64, 67, 0.3), 0px 1px 3px 1px rgba(60, 64, 67, 0.15);\\n\",\n              \"      fill: #174EA6;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    [theme=dark] .colab-df-convert {\\n\",\n              \"      background-color: #3B4455;\\n\",\n              \"      fill: #D2E3FC;\\n\",\n              \"    }\\n\",\n              \"\\n\",\n              \"    [theme=dark] .colab-df-convert:hover {\\n\",\n              \"      background-color: #434B5C;\\n\",\n              \"      box-shadow: 0px 1px 3px 1px rgba(0, 0, 0, 0.15);\\n\",\n              \"      filter: drop-shadow(0px 1px 2px rgba(0, 0, 0, 0.3));\\n\",\n              \"      fill: #FFFFFF;\\n\",\n              \"    }\\n\",\n           
   \"  </style>\\n\",\n              \"\\n\",\n              \"      <script>\\n\",\n              \"        const buttonEl =\\n\",\n              \"          document.querySelector('#df-66ec7fd2-693a-4295-b7bb-d5a196f03b9b button.colab-df-convert');\\n\",\n              \"        buttonEl.style.display =\\n\",\n              \"          google.colab.kernel.accessAllowed ? 'block' : 'none';\\n\",\n              \"\\n\",\n              \"        async function convertToInteractive(key) {\\n\",\n              \"          const element = document.querySelector('#df-66ec7fd2-693a-4295-b7bb-d5a196f03b9b');\\n\",\n              \"          const dataTable =\\n\",\n              \"            await google.colab.kernel.invokeFunction('convertToInteractive',\\n\",\n              \"                                                     [key], {});\\n\",\n              \"          if (!dataTable) return;\\n\",\n              \"\\n\",\n              \"          const docLinkHtml = 'Like what you see? Visit the ' +\\n\",\n              \"            '<a target=\\\"_blank\\\" href=https://colab.research.google.com/notebooks/data_table.ipynb>data table notebook</a>'\\n\",\n              \"            + ' to learn more about interactive tables.';\\n\",\n              \"          element.innerHTML = '';\\n\",\n              \"          dataTable['output_type'] = 'display_data';\\n\",\n              \"          await google.colab.output.renderOutput(dataTable, element);\\n\",\n              \"          const docLink = document.createElement('div');\\n\",\n              \"          docLink.innerHTML = docLinkHtml;\\n\",\n              \"          element.appendChild(docLink);\\n\",\n              \"        }\\n\",\n              \"      </script>\\n\",\n              \"    </div>\\n\",\n              \"  </div>\\n\",\n              \"  \"\n            ],\n            \"text/plain\": [\n              \"  backend  batch  encode_image  encode_text\\n\",\n              \"0    onnx      2         0.011        0.001\\n\",\n              \"1   torch      2         0.018        0.003\\n\",\n              \"2    onnx      8         0.035        0.002\\n\",\n              \"3   torch      8         0.010        0.003\\n\",\n              \"4    onnx     16         0.070        0.004\\n\",\n              \"5   torch     16         0.010        0.003\\n\",\n              \"6    onnx     32         0.145        0.007\\n\",\n              \"7   torch     32         0.012        0.004\\n\",\n              \"8    onnx     64         0.294        0.014\\n\",\n              \"9   torch     64         0.013        0.005\"\n            ]\n          },\n          \"metadata\": {},\n          \"execution_count\": 21\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"onnx_df = pd.DataFrame({\\\"ONNX\\\": [\\\"RuCLIPtiny\\\"] * 5,\\n\",\n        \"              \\\"batch\\\": [2, 8, 16, 32, 64],\\n\",\n        \"              \\\"encode_image\\\": [i[1] for i in onnx_results[\\\"encode_image\\\"]],\\n\",\n        \"              \\\"encode_text\\\": [i[1] for i in onnx_results[\\\"encode_text\\\"]]})\\n\",\n        \"onnx_df[\\\"total\\\"] = onnx_df[\\\"encode_image\\\"] + onnx_df[\\\"encode_text\\\"]\\n\",\n        \"\\n\",\n        \"print(onnx_df.to_markdown(index=False))\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"ol9_RiUoG34e\",\n        \"outputId\": 
\"82be9e0e-b92e-4e3c-8132-9269eb22a41d\"\n      },\n      \"execution_count\": null,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"| ONNX       |   batch |   encode_image |   encode_text |   total |\\n\",\n            \"|:-----------|--------:|---------------:|--------------:|--------:|\\n\",\n            \"| RuCLIPtiny |       2 |          0.011 |         0.001 |   0.012 |\\n\",\n            \"| RuCLIPtiny |       8 |          0.035 |         0.002 |   0.037 |\\n\",\n            \"| RuCLIPtiny |      16 |          0.07  |         0.004 |   0.074 |\\n\",\n            \"| RuCLIPtiny |      32 |          0.145 |         0.007 |   0.152 |\\n\",\n            \"| RuCLIPtiny |      64 |          0.294 |         0.014 |   0.308 |\\n\"\n          ]\n        }\n      ]\n    },\n    {\n      \"cell_type\": \"code\",\n      \"source\": [\n        \"clip_df = pd.DataFrame({\\\"TORCH\\\": [\\\"RuCLIPtiny\\\"] * 5,\\n\",\n        \"              \\\"batch\\\": [2, 8, 16, 32, 64],\\n\",\n        \"              \\\"encode_image\\\": [i[1] for i in clip_results[\\\"encode_image\\\"]],\\n\",\n        \"              \\\"encode_text\\\": [i[1] for i in clip_results[\\\"encode_text\\\"]]})\\n\",\n        \"clip_df[\\\"total\\\"] = clip_df[\\\"encode_image\\\"] + clip_df[\\\"encode_text\\\"]\\n\",\n        \"print(clip_df.to_markdown(index=False))\"\n      ],\n      \"metadata\": {\n        \"colab\": {\n          \"base_uri\": \"https://localhost:8080/\"\n        },\n        \"id\": \"qw8ZK9XeG4LY\",\n        \"outputId\": \"326b24f9-9d21-47ed-d62c-d7594e786b96\"\n      },\n      \"execution_count\": null,\n      \"outputs\": [\n        {\n          \"output_type\": \"stream\",\n          \"name\": \"stdout\",\n          \"text\": [\n            \"| TORCH      |   batch |   encode_image |   encode_text |   total |\\n\",\n            \"|:-----------|--------:|---------------:|--------------:|--------:|\\n\",\n            \"| RuCLIPtiny |       2 |          0.018 |         0.003 |   0.021 |\\n\",\n            \"| RuCLIPtiny |       8 |          0.01  |         0.003 |   0.013 |\\n\",\n            \"| RuCLIPtiny |      16 |          0.01  |         0.003 |   0.013 |\\n\",\n            \"| RuCLIPtiny |      32 |          0.012 |         0.004 |   0.016 |\\n\",\n            \"| RuCLIPtiny |      64 |          0.013 |         0.005 |   0.018 |\\n\"\n          ]\n        }\n      ]\n    }\n  ]\n}"
  },
  {
    "path": "requirements.txt",
    "content": "torch==1.13.1\nonnxruntime>=1.11.1\nonnx>=1.11.0\n"
  },
  {
    "path": "setup.py",
    "content": "import os\nimport pkg_resources\nfrom setuptools import setup, find_packages\n\nwith open(\"requirements.txt\", \"r\") as f:\n    install_requires = f.read().split(\"\\n\")\n\nsetup(\n    name=\"clip_onnx\",\n    version=\"1.2\",\n    py_modules=[\"clip_onnx, clip\"],\n    description=\"\",\n    author=\"Maxim Gerasimov\",\n    packages=find_packages(),\n    install_requires=install_requires,\n    include_package_data=True\n)\n"
  }
]